One of the hot topics in modern-day imaging techniques is 3D imaging.
The most straightforward way of obtaining 3D images is to take two or more images from different viewpoints and construct a 3D image from them. Such techniques essentially mimic the human vision system. The disadvantage of such techniques is that two cameras are needed, and that the distance between the cameras, their focal lengths and their lens distortions must be known in order to combine the images into 3D information.
There is a need for imaging techniques that allow 3D information to be obtained using a single camera, or at least a single lens.
A number of methods are known which use only a single camera, or a camera assisted by a static pattern projector. The methods can be divided into three groups: triangulation based, de-focus based and time-of-flight.
In triangulation-based methods, the depth is estimated from the local disparities between a projected pattern and an acquired pattern, i.e., the image the projected pattern forms on the objects in the scene. The distortions of such patterns provide an estimate of the distance. The disadvantage is that a pattern has to be projected and that, at best, an indirect, probabilistic estimate of the distance is obtained rather than a direct measurement.
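The disparity-to-depth relation underlying such triangulation can be sketched with the standard pinhole model. This is an illustrative example, not taken from the source; the function name, baseline and focal-length values are assumptions.

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth in metres from a local disparity, via the pinhole relation
    z = f * b / d (f: focal length in pixels, b: projector-camera baseline
    in metres, d: disparity in pixels). Illustrative sketch only."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive; zero disparity means infinite depth")
    return focal_px * baseline_m / disparity_px

# Assumed example setup: f = 800 px, b = 0.1 m; a 10 px disparity maps to 8 m.
z = depth_from_disparity(10.0, focal_px=800.0, baseline_m=0.1)
```

Note that the estimate degrades quickly at large distances, since the disparity shrinks as 1/z and any pixel-level measurement noise then dominates.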
In de-focus-based methods, a camera is focused at a certain distance, and the depth map is estimated by computing locally the amount of de-focus caused by deviations of the actual object distance from the distance of perfect focus. Again, only a probabilistic estimate is provided and, in principle, the outcome is ambiguous: the amount of de-focus alone gives no way of distinguishing with any certainty whether an object lies in front of or further away than the plane of focus.
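The ambiguity can be illustrated with the thin-lens blur-circle model: two different object distances, one nearer and one farther than the plane of focus, can produce exactly the same amount of blur. The model, function name and all numbers below are illustrative assumptions, not from the source.

```python
def blur_diameter(s: float, s_focus: float, f: float = 0.05, aperture: float = 0.02) -> float:
    """Thin-lens blur-circle diameter (m) for an object at distance s (m)
    when a lens of focal length f (m) and aperture diameter (m) is focused
    at s_focus (m). Illustrative sketch only."""
    return aperture * f * abs(s - s_focus) / (s * (s_focus - f))

# With the lens focused at 2 m, an object at 1.5 m and one at 3 m
# are blurred by the same amount, so de-focus alone cannot tell
# near from far.
near = blur_diameter(1.5, 2.0)
far = blur_diameter(3.0, 2.0)
```

Practical depth-from-defocus systems therefore break the ambiguity by other means, e.g. by comparing images taken at two focus settings.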
In contrast to this, time-of-flight methods do provide a realistic estimate of the distance to the camera. In time-of-flight methods, the object is illuminated with 5-50 ns light pulses. A special camera sensor then measures the delay between the emitted and reflected pulses, which grows with the distance to the object. An example of such a camera system is described in “A 3-D time of flight camera for object detection” by Ringbeck et al, Optical 3D measurement techniques 09-12.07.2007 ETH, plenary session 1: Range Imaging 1. However, the method requires sophisticated techniques and is not suited for all distances: the range is typically a few meters to 60 meters, and small distances are difficult to measure.
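The distance follows directly from the round-trip delay of the pulse, distance = c * delay / 2. The sketch below is illustrative; the function name is an assumption, but it also shows why small distances are hard: a nearby object returns the pulse in a delay shorter than the 5-50 ns pulses themselves.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_delay(delay_s: float) -> float:
    """Distance (m) to the object from the measured round-trip delay (s)
    of a light pulse: the pulse travels out and back, hence the factor 2.
    Illustrative sketch only."""
    return C * delay_s / 2.0

# A round-trip delay of 400 ns corresponds to roughly 60 m, the upper end
# of the typical range, while an object at 0.3 m returns in about 2 ns,
# shorter than the 5-50 ns pulse itself.
far = distance_from_delay(400e-9)
near = distance_from_delay(2e-9)
```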