Optical sensing measures distances to a scene by illuminating the scene with an optical signal and analyzing the time of flight (TOF) of the reflected light. One optical sensing technique, light radar (Lidar), can be used in applications such as geoscience, remote sensing, airborne laser swath mapping (ALSM), laser altimetry, contour mapping, and vehicle navigation. Conventional high-resolution, high frame-rate optical systems typically use an array of precision sensors and illuminate the scene with individual pulses transmitted by a stationary laser or other light source. Alternatively, at the expense of a reduced frame rate, a laser scans the scene.
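The TOF principle above reduces to a simple relation: a pulse travels to the scene and back, so the distance is half the round-trip time multiplied by the speed of light. The following is a minimal illustrative sketch of that conversion (the function name and the example timing value are assumptions for illustration, not part of the source):

```python
# Illustrative sketch of the time-of-flight distance relation d = c * t / 2,
# where t is the round-trip travel time of the reflected light pulse.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second (in vacuum)

def tof_to_distance(round_trip_time_s: float) -> float:
    """Convert a round-trip time of flight (seconds) to distance (meters)."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A pulse returning after about 66.7 nanoseconds corresponds to roughly 10 m.
print(tof_to_distance(66.7e-9))
```

The factor of two accounts for the pulse traversing the sensor-to-scene path twice, once outbound and once on reflection.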
Compressive sensing acquires a signal using fewer linear measurements than the signal's dimensionality implies. To enable this reduced acquisition rate, compressive sensing reconstruction methods exploit the structure of the signal; the most commonly used signal model for capturing that structure is sparsity. Because the sensing cost is reduced, compressive sensing can exploit significant gains in computational power, allowing elaborate signal models and reconstruction methods that, in turn, enable reduced sensor complexity. For example, some compressive depth sensing systems use a single sensor combined with a spatial light modulator and multiple pulses illuminating the scene.
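The measurement and sparse-reconstruction idea can be sketched concretely. The example below is an illustrative toy (the dimensions, the random measurement matrix, and the use of orthogonal matching pursuit as the reconstruction method are assumptions for illustration, not the architecture described in the source): a K-sparse signal of length N is acquired through M < N random linear measurements and then recovered greedily.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a length-N signal with only K nonzero entries is observed
# through M < N linear measurements y = A @ x (the compressive regime).
N, M, K = 128, 64, 3

x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = [3.0, -2.5, 2.0]  # sparse signal with a few strong components

A = rng.standard_normal((M, N)) / np.sqrt(M)  # random measurement matrix
y = A @ x                                     # M compressive measurements

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select the k columns of A
    most correlated with the current residual, refitting at each step."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, K)
# In this noiseless setting OMP typically recovers the sparse signal;
# the printed reconstruction error is usually near machine precision.
print(np.linalg.norm(x_hat - x))
```

The point of the sketch is the asymmetry the paragraph describes: the acquisition side performs only M inexpensive linear measurements, while the (comparatively elaborate) reconstruction computation runs off the sensor.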
However, even though compressive sensing can reduce the cost of TOF sensors, some TOF sensors still require relatively expensive hardware. Accordingly, there is a need for a different TOF sensor architecture that can reduce manufacturing cost.