In scanning laser projectors, images are projected by scanning laser light in a pattern, with individual pixels generated by modulating light from laser light sources as a scanning mirror sweeps the modulated light across the pattern. Depth mapping sensors have been developed to generate 3D maps of surfaces, where the 3D maps describe the variations in depth over the surface. Past attempts to combine scanning laser projectors with depth mapping have been constrained by power limitations: scanning laser projectors are in general themselves power-limited, and adding laser depth sensing to laser projection further tightens that power constraint.
Furthermore, some previous depth mapping sensors have been limited in flexibility. For example, typical depth mapping sensors have been limited to generating 3D maps at fixed, device-specific resolutions.
For example, some depth mapping sensors use CMOS imaging sensors to receive light reflected from the surface, and then generate the 3D map from that received light. These depth mapping sensors can determine the time-of-flight for light reflected from the surface and received by the CMOS imaging sensor, and use that determined time-of-flight to generate the 3D map.
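The time-of-flight principle described above can be sketched in a few lines: light travels to the surface and back, so the one-way depth is half the round-trip distance. The following is a minimal illustrative sketch, not the method of any particular device; the function and variable names (`tof_to_depth`, `depth_map`) are hypothetical.

```python
# Illustrative time-of-flight depth calculation. Assumes a 2D grid of
# measured round-trip times (in seconds), as might be read out from a
# CMOS imaging sensor; the data layout here is an assumption.

SPEED_OF_LIGHT_M_S = 299_792_458.0  # speed of light in vacuum, m/s


def tof_to_depth(round_trip_time_s: float) -> float:
    """Convert a round-trip time-of-flight to a one-way depth in meters.

    The emitted light covers the sensor-to-surface distance twice
    (out and back), so depth = c * t / 2.
    """
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0


def depth_map(round_trip_times_s):
    """Build a 2D depth map (meters) from a 2D grid of round-trip times."""
    return [[tof_to_depth(t) for t in row] for row in round_trip_times_s]
```

For scale, a surface one meter away returns a pulse after roughly 6.7 nanoseconds, which is why time-of-flight sensing demands fast timing electronics.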
However, such CMOS imaging sensors typically have a fixed horizontal and vertical resolution. Thus, depth mapping sensors that use such CMOS imaging sensors are limited to providing 3D maps with resolutions that are less than or equal to the horizontal and vertical resolution of the CMOS imaging sensors.
As such, there remains a need for improved devices that combine scanning laser projectors with depth mapping. There likewise remains a need for improved devices and methods for depth mapping, and in particular for depth mapping with improved flexibility.