The need for producing good-quality, high-resolution depth data is growing across a variety of electronic devices, including mobile devices, home entertainment systems, gaming systems, robots, drones, and cars. Depth data is used in many imaging and detection applications in the consumer and industrial markets.
Depth camera systems can be used to capture a scene and estimate the depth (or “z-distance”) of each pixel in the scene, thereby generating a “depth map,” an example of which is shown in FIG. 1. Some depth camera systems utilize stereo vision techniques (e.g., using multiple cameras) in which depth data is computed based on the disparity between matching features found in the images captured by the multiple cameras. However, in low light or low texture environments, it may be difficult to detect such features.
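The disparity-to-depth relationship underlying the stereo techniques mentioned above can be sketched as follows. This is a minimal illustration, assuming a rectified stereo pair with a known focal length and baseline; the numeric values are illustrative assumptions, not taken from the text.

```python
def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Convert a disparity (in pixels) between matched features in a
    rectified stereo pair to a z-distance, via z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Illustrative example: 700 px focal length, 5 cm baseline, 35 px disparity
z = disparity_to_depth(35.0, 700.0, 0.05)  # -> 1.0 (meters)
```

Note that depth is inversely proportional to disparity, which is why feature matching errors in low-texture regions translate directly into depth errors.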
Generally, depth camera systems (or, more simply, “depth cameras”) can be classified into passive depth cameras and active depth cameras.
Active depth cameras generally include an illumination component which emits light onto a scene. Broadly, these include “time-of-flight” active depth cameras, which emit diffuse modulated illumination onto the scene, and “structured light” active depth cameras, which emit an illumination pattern in order to project a textured pattern onto the scene. The projected pattern assists in the determination of disparities in general cases (e.g., by providing additional texture to low-texture objects) and also allows operation under insufficient ambient lighting (e.g., in dark environments).
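For the time-of-flight case, depth can be recovered from the phase shift of the modulated illumination. The following is a minimal sketch of continuous-wave time-of-flight depth recovery; the modulation frequency and measured phase are assumed example values, not parameters from the text.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(phase_rad, mod_freq_hz):
    """Depth from the phase shift of modulated illumination:
    z = c * phi / (4 * pi * f), for phase shift phi at modulation
    frequency f (the factor of 2 in the round trip gives the 4*pi)."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

# Illustrative example: 20 MHz modulation, pi/2 measured phase shift
z = tof_depth(math.pi / 2, 20e6)
```

The unambiguous range of such a sensor is limited to half the modulation wavelength, since the phase wraps at 2*pi.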
Generally, an active depth camera includes an active illumination component (or projection module or projector) SI, an image acquisition component SA, and a processing component SP, where the processing component implements a depth estimation algorithm. The illumination component SI illuminates the scene with diffuse or collimated light, which can be constant over time, pulsed, or otherwise modulated. The illumination may be concentrated at a single wavelength or span a range of wavelengths. The image acquisition component SA is configured to image the scene in the direction along which the active illumination component SI emits light (e.g., the emission optical axis or the projection optical axis).
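The data flow through these three components can be sketched schematically as below. This uses the component labels from the text (SI, SA, SP), but the additive illumination model, the noise model, and the placeholder depth-estimation step are all illustrative assumptions.

```python
import numpy as np

def illuminate(scene, pattern):
    """SI: project an illumination pattern onto the scene
    (simple additive model, an assumption for illustration)."""
    return scene + pattern

def acquire(illuminated_scene, noise_sigma=0.01):
    """SA: image the scene along the projection optical axis,
    with additive Gaussian sensor noise (assumed model)."""
    rng = np.random.default_rng(0)
    return illuminated_scene + rng.normal(0.0, noise_sigma,
                                          illuminated_scene.shape)

def estimate_depth(image):
    """SP: placeholder for the depth estimation algorithm; a real SP
    would, e.g., match the captured image against the projected
    pattern to recover per-pixel disparities."""
    return np.full(image.shape, np.nan)  # depth map, same resolution

scene = np.zeros((4, 4))
pattern = np.ones((4, 4))
depth_map = estimate_depth(acquire(illuminate(scene, pattern)))
```

The point of the sketch is only the pipeline shape: SI conditions the scene, SA captures it along the projection optical axis, and SP turns the captured image into a per-pixel depth map.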