Depth camera systems capture a scene and estimate the depth (or “z-distance”) of each pixel in the scene, thereby generating a “depth map,” an example of which is shown in FIG. 1. Generally, depth camera systems (or more simply “depth cameras”) can be classified into passive depth cameras and active depth cameras.
Active depth cameras generally include an illumination component that emits light onto a scene. Broadly, these include “time-of-flight” active depth cameras, which emit diffuse modulated illumination onto the scene, and “structured light” active depth cameras, which emit a collimated illumination pattern.
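To illustrate the time-of-flight principle mentioned above, the sketch below converts a measured phase shift of amplitude-modulated illumination into a depth value. This is a minimal, hedged example, not a description of any particular system: the function name, the continuous-wave modulation model, and the example modulation frequency are all assumptions made for illustration.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(phase_rad: float, mod_freq_hz: float) -> float:
    """Depth from the phase shift of amplitude-modulated light.

    The round-trip path length is (phase / 2*pi) * (C / f_mod), so the
    one-way depth is half of that. The result is only unambiguous within
    a range of C / (2 * f_mod); beyond that, the phase wraps around.
    """
    return (phase_rad / (2.0 * math.pi)) * C / (2.0 * mod_freq_hz)

# A phase shift of pi at 50 MHz modulation lands at half the unambiguous
# range, i.e. about 1.5 m.
d = tof_depth(math.pi, 50e6)
```

In practice the phase itself is estimated from multiple exposures per pixel (e.g., sampling the modulated signal at several phase offsets), which is one reason the processing component SP is a distinct part of such systems.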
Generally, an active depth camera includes an active illumination component SI, an image acquisition component SA, and a processing component SP, where the processing component implements a depth estimation algorithm. The illumination component SI illuminates the scene with diffuse or collimated light, which can be constant over time, pulsed, or modulated. The illumination may be concentrated in a single wavelength or span a range of wavelengths.
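As one example of a depth estimation algorithm that the processing component SP might implement, a structured-light system can triangulate depth from the disparity between where a pattern feature is projected and where it is observed. The sketch below is a simplified pinhole-camera model with hypothetical numbers; real systems also handle calibration, rectification, and feature matching.

```python
def disparity_to_depth(disparity_px: float,
                       focal_px: float,
                       baseline_m: float) -> float:
    """Pinhole triangulation: depth = f * B / d.

    disparity_px: horizontal shift of a pattern feature, in pixels
    focal_px:     focal length of the camera, in pixels
    baseline_m:   distance between illuminator SI and camera SA, in meters
    """
    if disparity_px <= 0.0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With an 800 px focal length, a 50 mm baseline, and a 10 px disparity,
# the triangulated depth is 4.0 m.
z = disparity_to_depth(10.0, 800.0, 0.05)
```

Note the inverse relationship: depth resolution degrades as disparity shrinks, which is why baseline and focal length are important design parameters in such systems.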
Some active illumination components SI use a light emitter such as a laser and one or more optical elements to generate a collimated beam having a pattern. Commonly, one or more diffractive optical elements are used to replicate an incident collimated beam into a collection of collimated beams that together form the illumination pattern. FIG. 2 is an example of a pattern emitted by an illumination component SI. As seen in FIG. 2, there is a bright spot (e.g., a large white spot) in the center of the pattern. This bright spot is often called the “zero-order” or “0th order” and results from a portion of the incident collimated beam propagating directly through the diffractive element(s) that generate the pattern, undiffracted. In many cases, 1% to 5% (or more) of the optical energy emitted by the light emitter is concentrated in the zero-order spot, and the zero-order spot may be 100 to 500 times brighter than any other portion of the pattern. This high concentration of optical energy in one location is a limiting factor or bottleneck for generating practical patterns because, for example, an excessively strong collimated zero order may not pass consumer electronics laser safety requirements.
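A quick back-of-envelope calculation shows how the figures cited above (1%–5% of emitted energy in the zero order) translate into the 100×–500× brightness ratio. The dot count and power level below are hypothetical, chosen only to make the arithmetic concrete; the model assumes the non-zero-order power is spread evenly across the pattern dots.

```python
def zero_order_ratio(total_power_mw: float,
                     zero_order_fraction: float,
                     num_dots: int) -> float:
    """Ratio of zero-order power to the average power of one pattern dot.

    Assumes the remaining (1 - fraction) of the power is split evenly
    among num_dots dots, a simplification of a real diffraction pattern.
    """
    zero_order_power = total_power_mw * zero_order_fraction
    per_dot_power = total_power_mw * (1.0 - zero_order_fraction) / num_dots
    return zero_order_power / per_dot_power

# With 2% of the emitted power in the zero order and a 10,000-dot
# pattern, the zero order comes out roughly 200x brighter than a
# typical dot -- consistent with the 100x-500x range cited above.
r = zero_order_ratio(100.0, 0.02, 10_000)
```

This is why the zero order, rather than the pattern dots, tends to set the eye-safety limit on total emitted power.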
In general, it is difficult or impossible to fully eliminate the zero order in a manufacturing setting. This is because manufacturing tolerances, light source wavelength variation, and other factors in practice result in the appearance of a zero order, even if the zero order is absent from the abstract design.
In addition, when integrating a depth camera 102 including an illumination component 106 into a portable computing device such as a laptop computer, smartphone, or other mobile device, as shown for example in FIG. 3A, the thickness (or z-thickness or z-height) of the depth camera along its optical axis may be limited by the desired form factor of the computing device (e.g., a thickness less than 3.5 mm for the illuminator). In addition, these portable computing devices are currently under market pressure to be smaller and thinner. FIG. 3B is a schematic diagram of an image acquisition component SA and an active illumination component SI, with x, y, and z axes labeled. As used herein, the z axis corresponds to the main optical axis of the element, e.g., the axis along the field of view of the image acquisition component SA and the axis along which the illumination component SI emits light.
Generally, an illumination component SI used in such systems has a co-linear optics package in which a light emitting component (e.g., a laser) is aligned on the same axis as various optical components such as a refractive lens and a separate diffractive optical element for generating the pattern. Such an illumination component generally has a thickness of at least 7 mm along the direction of the emission optical axis (or projection optical axis).