Active sensors, such as light detection and ranging (LIDAR) sensors, radio detection and ranging (RADAR) sensors, and sound navigation and ranging (SONAR) sensors, among others, can scan an environment by emitting signals toward the environment and detecting reflections of the emitted signals. Passive sensors, such as image sensors and microphones among others, can detect signals originating from sources in the environment.
An example LIDAR sensor can determine distances to environmental features while scanning through a scene to assemble a “point cloud” indicative of reflective surfaces. Individual points in the point cloud can be determined, for example, by transmitting a laser pulse and detecting a returning pulse, if any, reflected from an object in the environment, and then determining a distance to the object according to a time delay between the transmission of the pulse and the reception of its reflection. Thus, a three-dimensional map of points indicative of locations of reflective features in the environment can be generated.
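The time-of-flight calculation described above can be sketched as follows. This is an illustrative sketch, not an implementation from the source; the function name and the one-microsecond example delay are assumptions. The distance is half the round-trip time multiplied by the speed of light, since the pulse travels to the object and back.

```python
# Sketch (hypothetical helper): estimating range to a reflective surface
# from the delay between transmitting a laser pulse and receiving its
# reflection. Distance = (speed of light) * (round-trip delay) / 2,
# because the pulse covers the distance twice (out and back).

C = 299_792_458.0  # speed of light in meters per second

def range_from_time_delay(delay_s: float) -> float:
    """Return the distance (m) to the reflecting object given the
    delay (s) between pulse transmission and reflection detection."""
    return C * delay_s / 2.0

# For example, a reflection detected 1 microsecond after transmission
# corresponds to a surface roughly 150 meters away:
d = range_from_time_delay(1e-6)
```

Repeating this measurement across many pulse directions while the sensor scans yields the set of points assembled into the point cloud.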
An example image sensor can capture an image of a scene visible to the image sensor. For instance, the image sensor may include an array of complementary metal oxide semiconductor (CMOS) active pixel sensors, or other types of light sensors. Each CMOS sensor may receive a portion of the light from the scene incident on the array, and may output a measure of the amount of light incident on it during an exposure time when the array is exposed to the light from the scene. With this arrangement, an image of the scene can be generated, where each pixel in the image indicates one or more values (e.g., colors, etc.) based on the outputs from the array of CMOS sensors.
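The mapping from per-sensor light measurements to image pixels can be sketched as follows. This is a simplified grayscale illustration, not from the source; the function name, the raw reading range, and the example values are assumptions.

```python
# Sketch (hypothetical names): forming an image from per-sensor outputs.
# Each entry in `readings` stands in for the output of one CMOS active
# pixel sensor: a measure of the light incident on it during the
# exposure time. Scaling each reading to a pixel value and arranging
# the results in rows yields the image.

def readings_to_image(readings, rows, cols, full_scale=1023):
    """Map raw sensor readings (0..full_scale) to 8-bit grayscale pixel
    values, arranged as a rows x cols image (a list of row lists)."""
    pixels = [min(255, max(0, round(r * 255 / full_scale))) for r in readings]
    return [pixels[i * cols:(i + 1) * cols] for i in range(rows)]

# A 2x2 array of raw sensor outputs becomes a 2x2 grayscale image:
image = readings_to_image([0, 512, 768, 1023], rows=2, cols=2)
```

A color image would additionally interpolate values from neighboring sensors behind different color filters, but the basic output-to-pixel mapping is the same.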