Low light level imaging devices are useful in a wide variety of applications. For example, low light level imaging devices are useful in nighttime surveillance activities. In low light level conditions, it is important for sensors to take advantage of every available photon. Approaches to providing human perceptible images in low light conditions have included the use of image intensifiers. Other approaches to providing images in low light conditions have utilized the detection of light having wavelengths outside of the visible spectrum. Although such approaches have had success in providing human perceptible images of scenes that could not otherwise be viewed by the human eye, such approaches have been unable to provide chrominance information in combination with high sensitivity at low light levels.
Image intensifiers are generally formed using microchannel plates. In particular, a photocathode is positioned near a honeycomb of small channels (or microchannels) upon which a voltage gradient is imposed. When a photon collides with the photocathode, an electron is released and is accelerated along a channel. The electron is focused on a phosphor screen, which produces photons in response to being bombarded by the electrons ejected from the microchannels. The resulting image on the phosphor screen may be viewed directly, or may be converted into digital information by an imaging device, such as a charge coupled device (CCD).
The amplification of light provided by an image intensifier is effective in providing views of scenes at low light levels. However, the use of a phosphor screen results in a monochromatic image. In addition, the limited resolution of the microchannel element in turn limits the image resolution available at the phosphor screen. Also, a “halo” effect can occur when electrons bounce off the mouth of a channel and enter a neighboring channel. Furthermore, image intensifiers require a relatively high voltage for operation, and have a finite life span.
Another approach to providing high sensitivity imaging devices in low light conditions is to utilize image sensors that are capable of detecting light falling outside of the normal range of human vision. For example, typical nighttime scenes are relatively rich in infrared light wavelengths. Therefore, by detecting infrared wavelengths and providing the detected infrared information as luminance (or brightness) information to a human perceptible display, high sensitivity may be obtained. However, systems utilizing imagers that are sensitive to infrared wavelengths do not provide information regarding the colors of objects present in the imaged scene.
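The limitation noted above can be illustrated with a minimal sketch: a single-band infrared intensity image supplies only luminance, so the displayed frame is necessarily a gray image with no chrominance. The function name, normalization, and 8-bit output depth below are illustrative assumptions, not details from the source.

```python
import numpy as np

def ir_to_display(ir, bit_depth=8):
    """Map a single-band IR intensity image to a monochrome display frame.

    The detected infrared signal provides only luminance (brightness)
    information; with no chrominance available, every output pixel is a
    shade of gray. (Illustrative sketch; normalization scheme assumed.)
    """
    lo, hi = float(ir.min()), float(ir.max())
    scale = (2 ** bit_depth - 1) / (hi - lo) if hi > lo else 0.0
    y = ((ir - lo) * scale).astype(np.uint8)
    # Replicate the luminance value into R, G, and B: the result is
    # gray everywhere, reflecting the absence of color information.
    return np.stack([y, y, y], axis=-1)
```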
As a further alternative, imaging devices or cameras utilizing three image sensors or chips for detecting color information, and a fourth chip for detecting luminance information, have been proposed. However, multichip designs are difficult to manufacture and implement. In addition, the ability of such designs to provide high sensitivity is compromised by splitting the light gathered by the device's lens system among four different imagers. Furthermore, the use of four separate image sensors results in an overall package that is relatively large and expensive to produce.
In consumer and military applications, it is desirable to provide imaging devices that are relatively small and light, and that use relatively little power. Accordingly, most consumer imaging devices and many imaging devices designed for military applications utilize a single image sensor. As a result, existing imaging devices that provide color information are relatively insensitive in low light conditions, while imaging devices optimized for high sensitivity and low light conditions typically provide a monochromatic image.
The visual tasks of detection and recognition can be greatly aided if color information is provided to the viewer. Imaging devices capable of providing color information typically do so by separately sampling light having bands of color centered on the red, green, and blue portions of the spectrum. However, because filtering light requires the rejection of at least some components of the light incident on the filter, filtering reduces the sensitivity that might otherwise be available from an imaging device. One approach to increasing the light sensitivity of an image sensor used in connection with a color imaging device is described by Bayer in U.S. Pat. No. 3,971,065. The filter disclosed therein, known as a Bayer filter, establishes pixel subsets distributed across the entire array of pixels in an image sensor. Each subset of pixels consists of one pixel having a filter that admits red light, one pixel having a filter that admits blue light, and two pixels having filters that admit green light. The Bayer filter favors green filter elements because green is the main contributor to the luminance information in a scene. This preference for luminance (or brightness) information over chrominance information provides an image sensor with greater sensitivity and resolution. However, because at least some filtering is performed before photons reach the pixels of the image sensor, the sensitivity of devices employing Bayer filtration could be improved.
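The Bayer sampling pattern described above can be sketched as follows: each 2×2 pixel subset retains one red sample, two green samples, and one blue sample from the incident scene. The function name and the specific RGGB row ordering below are illustrative assumptions; Bayer's patent covers the general checkerboard arrangement rather than one fixed orientation.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an H x W x 3 RGB scene through a Bayer color filter array.

    Each 2x2 subset keeps one red, two green, and one blue sample,
    reflecting the green-favoring pattern of U.S. Pat. No. 3,971,065.
    (Illustrative sketch; an RGGB orientation is assumed.)
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue at odd rows, odd cols
    return mosaic
```

Because each pixel records only one color band, the two rejected bands at every site represent the sensitivity loss the passage describes; full color must later be reconstructed by interpolation (demosaicing) from neighboring samples.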
It would be desirable to provide an electronic color imaging device that is capable of providing high sensitivity. In addition, it would be advantageous to provide such a device that utilized a single image sensor. Furthermore, it would be desirable to provide such a device that was relatively inexpensive to implement and easy to manufacture.