Sensors are devices that measure physical quantities. A measurement can be a single value acquired by a single sensor, such as a velocity vector, or a multiplicity of measurements, such as the intensity of light falling within a certain band or spectrum in a scene, or light emitted in response to a stimulus applied to an object, as in magnetic resonance imaging (MRI).
Sensors can be arranged in a one-dimensional or two-dimensional array. A two-dimensional array sensor comprises a multiplicity of “small” measuring areas, or “microsensors”. Traditionally, a two-dimensional array of microsensors is fabricated to be sensitive to a certain type of measurement; that is, the array is fabricated to measure a single type of data. An example of a sensor array formed from a multiplicity of microsensors is the everyday camera. A camera contains microsensors, called pixels, that are sensitive to visible light; that is, each microsensor measures the intensity of light falling on its surface. Each microsensor (or pixel) has a photoelectric area that collects incoming light and converts it to an electric signal that is a function of the intensity of that light. Thus, light falling on each microsensor is converted to an electrical signal that is read as an analog value. The microsensor whose value is to be read is selected by decoding circuitry that chooses the row and column connected to that microsensor. The value is then amplified, converted to a digital value, and stored in memory for subsequent use in a digital computing and processing system, which can be employed in different applications.
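The readout chain described above can be sketched in code. This is a minimal, hypothetical model, not a real sensor interface: the function name, gain, and bit depth are illustrative assumptions chosen only to show row/column selection, amplification, analog-to-digital conversion, and storage in a frame buffer.

```python
import numpy as np

def read_pixel(analog_array, row, col, gain=2.0, adc_bits=10):
    """Select one microsensor by row/column, amplify, and digitize its value."""
    analog = analog_array[row, col]          # decoding circuitry selects the cell
    amplified = analog * gain                # analog amplification stage
    full_scale = (1 << adc_bits) - 1         # e.g. 1023 for a 10-bit ADC
    # Clamp to the ADC input range and quantize to a digital code.
    return int(min(max(amplified, 0.0), 1.0) * full_scale)

# Simulated analog intensities for a tiny 4x4 array, then a full readout
# into a digital frame buffer (the "memory" in the description above).
rng = np.random.default_rng(0)
sensor = rng.uniform(0.0, 0.5, size=(4, 4))
frame = np.array([[read_pixel(sensor, r, c) for c in range(4)]
                  for r in range(4)])
```

In an actual device the amplifier and ADC are analog circuits shared across rows or columns; the sketch only mirrors their functional order.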
The array of pixels can be made sensitive to different colors by placing a different light-admittance filter on each pixel. For example, one filter can admit light in the green color spectrum, another in the red color spectrum, and a third in the blue color spectrum. The Bayer pattern is a commonly used microsensor (pixel) layout for the two-dimensional sensor arrays used in visible-spectrum cameras.
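The Bayer layout can be illustrated with a short sketch. The 2x2 tile below is one common orientation of the pattern; actual sensors may rotate or mirror it.

```python
import numpy as np

def bayer_pattern(rows, cols):
    """Return an array of filter labels for a rows x cols pixel array
    tiled with a repeating 2x2 Bayer cell (one common orientation)."""
    tile = np.array([["G", "R"],
                     ["B", "G"]])
    return np.tile(tile, (rows // 2, cols // 2))

pattern = bayer_pattern(4, 4)
# Every 2x2 tile contains two green filters and one red and one blue,
# reflecting the eye's greater sensitivity to green light.
```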
Approaches to measuring two different types of data fall into two categories. The first is the straightforward approach of using a separate sensor for each measurement: one for capturing visible images and another for capturing infrared data (to be used for depth computation). The second approach is to use a single array that mixes pixels that measure visible light with pixels that measure infrared. The first approach has the disadvantage of requiring a separate sensor. The second approach has two major drawbacks. First, both measurements lose resolution, since the pixel array is space-shared by different types of microsensors. Second, where the depth microsensors are placed “under” the visible-light microsensors, a dramatic degradation in the quality of the visible image may occur.
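The resolution loss of the second (space-sharing) approach can be quantified with a sketch. The mosaic below is an assumed, illustrative RGB-IR variant in which one green pixel of each 2x2 tile is replaced by an infrared pixel; real mosaics vary by vendor.

```python
import numpy as np

def rgbir_pattern(rows, cols):
    """Tile an illustrative RGB-IR mosaic: one IR pixel per 2x2 cell."""
    tile = np.array([["G", "R"],
                     ["B", "I"]])   # "I" marks an infrared-sensitive pixel
    return np.tile(tile, (rows // 2, cols // 2))

p = rgbir_pattern(8, 8)
ir_fraction = (p == "I").sum() / p.size    # fraction of sites measuring IR
vis_fraction = (p != "I").sum() / p.size   # fraction left for visible light
# In this layout only 75% of the array measures visible light and 25%
# measures IR, so both measurements sample at reduced spatial resolution.
```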
These and other needs are addressed by the various aspects, embodiments, and/or configurations of the present disclosure.