There are many different three-dimensional imaging systems, as well as many different ways to represent an object in three-dimensional space. The most popular are stereoscopic imaging systems, which acquire depth information from a scene by simulating the parallax of human vision. When the human eyes view a scene, the right and left eyes have two different perspectives due to their separation. The brain fuses these two perspectives and assesses the visual depth. Similarly to the human eyes, stereoscopic three-dimensional imaging systems take two perspective images with two parallel cameras that are disposed to view the scene from different angles at the same time, as disclosed in U.S. Pat. No. 5,432,712 to Chan. These devices, however, tend to be large, heavy, and costly due to the multiple camera systems and the required separation of their optical axes. Also, when stereoscopic images are displayed for three-dimensional viewing, many technical problems can arise, involving the arbitrary distribution of viewer positions, viewing by multiple viewers, binocular disparity due to deviations in the distance between the two eyes, vergence, fatigue accumulation in the eye, accommodation, the relative position change of the three-dimensional image due to viewer movement, etc.
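The depth calculation behind such stereoscopic systems can be sketched as a simple triangulation. The following is a minimal illustration under the standard pinhole-camera assumption (parallel optical axes); the function name and all numeric values are illustrative and not taken from the Chan patent:

```python
# Depth from stereo disparity under the pinhole model with parallel cameras:
#   Z = f * B / d
# where f is the focal length (pixels), B is the baseline (camera separation),
# and d is the disparity (pixel shift of a feature between the two images).

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Triangulate the depth of a matched pixel pair (illustrative helper)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# Example with assumed values: f = 800 px, B = 0.06 m (roughly eye-separation
# scale), d = 16 px.
print(depth_from_disparity(800, 0.06, 16))  # 3.0 (meters)
```

Note that depth resolution degrades with distance, since a one-pixel disparity change corresponds to a larger depth interval for far objects.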
U.S. Pat. No. 6,503,195 to Keller discloses a structured light depth extraction system in which a projector projects a structured light pattern, such as a grid, in visible or invisible form onto an object, and an image processor then calculates depth information based on the reflected light pattern. When visible light is used, image quality can be degraded, while invisible light requires an additional sensor system. Also, the performance of the structured light depth extraction system depends on the reflectivity of the object.
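One common way such systems identify which projected stripe a camera pixel sees is Gray-code encoding; the Keller patent describes grid patterns generically, so the scheme below is an illustrative assumption rather than the patented method. A sequence of binary stripe patterns is projected, and the black/white bit observed at a pixel across the sequence decodes to a projector column index, from which depth is triangulated:

```python
# Decode a Gray-code bit sequence (MSB first) observed at one camera pixel
# into the projector column index. Gray codes are used so that adjacent
# stripes differ by a single bit, making decoding robust to edge errors.

def gray_to_binary(bits):
    """Convert a list of Gray-code bits (MSB first) to an integer index."""
    value = bits[0]
    decoded = [value]
    for b in bits[1:]:
        value ^= b          # each binary bit is the XOR of the running value
        decoded.append(value)
    index = 0
    for bit in decoded:
        index = (index << 1) | bit
    return index

# A pixel that saw the pattern sequence white, white, black (bits 1, 1, 0)
# lies in projector column 4 (Gray code of 4 is 0b110).
print(gray_to_binary([1, 1, 0]))  # 4
```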
U.S. Pat. No. 3,506,327 to Leith discloses a holographic imaging system, which uses coherent radiation to produce an object-bearing beam and a reference beam. These two beams produce an interference fringe pattern on the detector, wherein the intensity and phase information of the light are recorded. A three-dimensional image can be reconstructed by illuminating the interference fringe pattern with the reference beam. The maximum image depth is limited mainly by the coherence length of the beam. The holographic imaging system requires an expensive, high-power-consuming coherent light source, such as a laser, and near-darkroom conditions for imaging. Therefore, the holographic imaging system is not suitable for portable imaging devices and may raise safety concerns when used in public areas.
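The recording step above can be summarized in one standard equation. Under the usual scalar-wave description, the detector records the intensity of the superposed object beam O and reference beam R:

```latex
I = |O + R|^2 = |O|^2 + |R|^2 + O R^* + O^* R
```

The cross terms OR* and O*R carry the phase of the object beam; re-illuminating the recorded fringe pattern with R regenerates the object wavefront, which is why the three-dimensional image can be reconstructed.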
U.S. Pat. No. 5,032,720 to White and U.S. Pat. No. 6,949,069 to Farkas disclose three-dimensional confocal systems in which a point of interest is illuminated by a light source through a pinhole aperture. The confocal system can provide a high-resolution three-dimensional image with a single camera system, but most of the illuminating light is wasted and causes noise problems. To overcome this, U.S. Pat. No. 6,749,346 to Dickensheets and U.S. Pat. No. 6,563,105 to Seibel use a single optical fiber to scan the scene and collect the reflected light, but point-by-point scanning can lead to a slow image refresh rate.
The depth-from-focus criterion is well known for three-dimensional imaging, wherein a sequence of images is taken while changing the camera focus and in-focus regions are extracted from the images. The camera focus can be changed in many different ways. U.S. Pat. No. 5,986,811 to Wohlstadter discloses a three-dimensional imaging method and system using conventional motorized optics, having an input lens and an output lens, to change the focal length of the imaging system. Conventional motorized optics has a slow response time and requires complex driving mechanisms to control the relative positions of the lenses. Therefore, it is difficult to use in a real-time imaging system or to miniaturize.
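The depth-from-focus procedure described above can be sketched in a few lines. This is a minimal illustration, not the Wohlstadter system: it uses a modified-Laplacian focus measure (one common choice) and assumes the focus distance of each frame in the stack is known:

```python
# Depth from focus: capture a stack of images at known focus settings, then,
# for each pixel, select the frame with the highest local sharpness and
# assign that frame's focus distance as the pixel's depth.

def sharpness(img, x, y):
    """Modified-Laplacian focus measure at pixel (x, y): sum of absolute
    second differences in the horizontal and vertical directions."""
    return (abs(2 * img[y][x] - img[y][x - 1] - img[y][x + 1])
            + abs(2 * img[y][x] - img[y - 1][x] - img[y + 1][x]))

def depth_from_focus(stack, focus_distances):
    """Return a per-pixel depth map over the interior pixels of the images."""
    h, w = len(stack[0]), len(stack[0][0])
    depth = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            best = max(range(len(stack)),
                       key=lambda i: sharpness(stack[i], x, y))
            depth[y][x] = focus_distances[best]
    return depth

# Tiny synthetic stack: the second frame is sharply focused at the center
# pixel, so that pixel is assigned the second frame's focus distance.
flat  = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
sharp = [[1, 1, 1], [1, 9, 1], [1, 1, 1]]
print(depth_from_focus([flat, sharp], [0.5, 1.0])[1][1])  # 1.0
```

The achievable depth resolution is set directly by how many focus settings can be captured per three-dimensional frame, which is why the focus-change speed of the optics matters.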
U.S. Pat. No. 6,344,930 to Kaneko discloses a total-focus imaging system using a sealed liquid lens actuated by a piezoelectric actuator to change the focal length of the imaging system. The proposed liquid lens has a slow focal length change speed of several hundreds of Hz. Considering the standard video or movie frame rate, the system can make only a half dozen focal length changes for each three-dimensional image. Besides, the lens has a small focal length variation range. These problems limit the possible depth range and the depth resolution of the three-dimensional image.
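The focal-plane budget above follows from a simple ratio. As a back-of-envelope check with assumed numbers (a 200 Hz lens, standing in for "several hundreds of Hz", and 30 frames per second video):

```python
# Number of focal-length steps available per three-dimensional frame is the
# lens switching rate divided by the video frame rate (assumed values).
lens_rate_hz = 200      # assumed speed within "several hundreds of Hz"
video_rate_fps = 30     # standard video frame rate
print(lens_rate_hz // video_rate_fps)  # 6 focal planes per 3-D frame
```

Six depth samples per frame explains the "half dozen" figure and why both the depth range and the depth resolution are limited at video rates.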
Among the most advanced variable focal length lenses is the liquid crystal variable focal length lens, whose focal length is changed by modulating the refractive index. However, it requires a complex control mechanism and has a slow response time, typically on the order of hundreds of milliseconds; even the fastest liquid crystal lenses respond in tens of milliseconds, which still yields a low-depth-resolution three-dimensional image. It also has a small focal length variation and a low focusing efficiency.
A variable focal length lens with high speed, a large variation of numerical aperture, and a large diameter is necessary to obtain a real-time three-dimensional image with a large depth range and high depth resolution.