A depth camera system or a range camera can be used to capture depth information about a scene. In particular, a depth camera system can generate a two-dimensional image or “depth map” where each value in the image corresponds to the distance between the depth camera and a portion of the scene that is in the field of view of the camera. The depth information may also be referred to as three-dimensional information, and the resulting depth map may be referred to as a three-dimensional reconstruction of the scene. This can be contrasted with a traditional camera, which captures the amount of light received from portions of the scene in the field of view of the camera, but not the distances of the objects and other features of the scene.
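The relationship between a depth map and a three-dimensional reconstruction can be illustrated with a short sketch. Assuming a simple pinhole camera model with hypothetical intrinsic parameters (the focal lengths `fx`, `fy` and principal point `cx`, `cy` below are illustrative values, not taken from the text), each depth value can be back-projected to a 3-D point in camera coordinates:

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal lengths (pixels) and principal point.
fx, fy, cx, cy = 500.0, 500.0, 320.0, 240.0

def depth_map_to_point_cloud(depth):
    """Back-project a depth map (meters per pixel) into 3-D camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

# Toy scene: a flat wall 2 m in front of the camera.
depth = np.full((480, 640), 2.0)
cloud = depth_map_to_point_cloud(depth)
print(cloud.shape)       # (480, 640, 3)
print(cloud[240, 320])   # pixel at the principal point -> [0. 0. 2.]
```

In this sense the depth map, together with the camera intrinsics, encodes a full three-dimensional reconstruction of the visible scene.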
One class of depth camera systems uses a projection system or projection source to assist in the reconstruction of the depth information by projecting light onto a scene. Such systems may be referred to herein as being “active,” as contrasted with “passive” depth camera systems that do not include a projection system. Active projection approaches include: projecting an encoded pattern, such as those commonly used in structured-light methods; projecting a pattern to create a texture on the scene; and projecting a pattern that is designed or optimized for three-dimensional reconstruction. Projecting a texture or a pattern designed for three-dimensional reconstruction is typically used with systems that include two or more cameras.
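The benefit of projecting a texture onto the scene can be seen in a toy one-dimensional block-matching sketch (an illustrative simplification, not a method described in the text): on a textureless surface every candidate disparity matches equally well, while a projected random pattern makes the correct match unique.

```python
import numpy as np

rng = np.random.default_rng(0)
true_disp = 7  # ground-truth shift between the two simulated views

def best_disparity(scene, max_disp=20, win=100):
    """Toy 1-D block matcher: pick the shift with the lowest SSD cost."""
    left = scene[true_disp:true_disp + win]            # second view, shifted
    costs = [np.sum((left - scene[d:d + win]) ** 2)    # SSD for each candidate
             for d in range(max_disp)]
    return int(np.argmin(costs))

flat = np.ones(200)                       # textureless wall: all shifts tie
textured = flat + 0.2 * rng.random(200)   # same wall plus projected dot pattern
print(best_disparity(flat))       # 0 -- every cost is zero, tie broken arbitrarily
print(best_disparity(textured))   # 7 -- the pattern makes the match unambiguous
```

This is why texture projection is paired with two-camera (stereo) systems: the pattern supplies the surface detail that correlation-based matching needs.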
The aforementioned methods therefore rely on a projection system to project a pattern in the visible or invisible portion of the optical spectrum. In many such cases, the light source used for the projection is a coherent laser light source. The advantages of using such a light source include, among others, energy efficiency, compact physical size, reliability, and ease of system integration. Furthermore, this coherent light source can be used in conjunction with diffractive optics to produce a pattern of collimated spots, collimated lines, or an uncollimated diffuse pattern.
The projection pattern is then imaged by an appropriate imaging sensor, such as a complementary metal oxide semiconductor (CMOS) image sensor or a charge coupled device (CCD) image sensor. In most applications, these imaging sensors are sensitive to the intensity of light and not the phase of light, and have a finite detection aperture.
When such an imaging sensor is used to image a projection pattern produced by coherent light, such as coherent laser light, the image will generally include speckle noise. (See, e.g., Dainty, J. Christopher. Laser Speckle and Related Phenomena. Topics in Applied Physics, Vol. 9. Berlin and New York: Springer-Verlag, 1975; J. W. Goodman, “Some Fundamental Properties of Speckle,” J. Opt. Soc. Am. 66, pp. 1145-1149, 1976; and Wang, Lingli, et al. “Speckle reduction in laser projection systems by diffractive optical elements.” Applied Optics 37.10 (1998): 1770-1775.) The speckle becomes especially evident when the laser projection is imaged using compact camera systems that have a small collection aperture. Furthermore, the speckle noise is angle dependent, in that two different image sensors viewing the same coherent pattern from two angles will observe different speckle noise.
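The severity of this effect can be seen with a minimal random-phasor simulation in the spirit of Goodman's analysis (the pixel and scatterer counts below are arbitrary illustrative choices): at each pixel many coherent contributions with random phases add up, and the resulting intensity fluctuates as strongly as its mean, i.e., the speckle contrast approaches one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fully developed speckle: at each pixel the optical field is a sum of many
# scattered contributions with uniformly random phases.
n_pixels, n_scatterers = 100_000, 200
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_pixels, n_scatterers))
field = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_scatterers)

# Image sensors record intensity (|field|^2), not phase.
intensity = np.abs(field) ** 2

# Speckle contrast C = std(I) / mean(I); theory predicts C -> 1 for
# fully developed speckle under coherent illumination.
contrast = intensity.std() / intensity.mean()
print(round(contrast, 2))  # close to 1, i.e., fluctuations as large as the mean
```

A contrast near unity means the projected dots can locally vary from nearly dark to twice their average brightness, which is why speckle is so disruptive for depth reconstruction from coherent projection patterns.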