Field of the Invention
The present invention relates to solid state imaging sensors. More particularly, the present invention relates to high sensitivity, high resolution oversampled imaging sensors.
Background Art
Known imaging sensors are composed of a pixelated light-detecting element. Each pixel converts the incoming photons focused onto a focal plane into electrons and holes, known as photocharge. The photocharge, along with other detector-induced charge, is injected into the input of a readout integrated circuit amplifier and stored in a charge storage capacitance, typically in the vicinity of the photon detector.
Many past two-dimensional (2D) imaging sensors have individual detectors with individual circuits underneath the detectors to support the capture of the charge generated from the impinging photons. There is uncertainty in the arrival of photons in time, and uncertainty as to where a photon will land on the focal plane array. Because most sensors provide a pixel size comparable to the size of the optical blur, which is limited by the physical sensor aperture and system optics, such imagers often exhibit aliasing and related artifacts. Even in improved sensors, the physics of light dictates random arrival of photons at each pixel. In addition, with real optics and a limiting aperture, a spot focused from infinity shows an Airy-shaped blur, so named after the work of George Biddell Airy. Because the imaging field has to deal with the uncertainty of photon arrival in time and space, it would be desirable to create a sensor with improved methods to handle both the temporal and spatial random arrival effects of light in a simpler manner.
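The relationship between the diffraction-limited Airy blur and pixel size can be illustrated with a short sketch. The wavelength, f-number, and pixel pitch below are illustrative assumptions for a notional midwave IR system, not parameters of any particular sensor:

```python
# Illustrative sketch: first-null (Airy) blur diameter of a
# diffraction-limited system, compared against an assumed pixel pitch.
def airy_blur_diameter_um(wavelength_um, f_number):
    """Diameter of the Airy disk out to its first null: 2.44 * lambda * f/#."""
    return 2.44 * wavelength_um * f_number

# Assumed example: midwave IR at 4 um wavelength with f/2 optics.
blur_um = airy_blur_diameter_um(4.0, 2.0)   # 19.52 um
pixel_pitch_um = 15.0                       # assumed typical IR pixel pitch

# Nyquist sampling of the blur calls for a pitch of roughly blur / 2,
# so a 15 um pixel undersamples this ~19.5 um blur.
nyquist_pitch_um = blur_um / 2.0            # 9.76 um
undersampled = pixel_pitch_um > nyquist_pitch_um
```

Under these assumptions the pixel is larger than half the blur, which is the undersampled regime in which aliasing artifacts arise.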
Past techniques have involved missing, ignoring, or discarding imagery because of lower resolution, higher noise, or the challenges of transmitting, storing, and processing high-data-rate imagery.
Yet other techniques have controllable resolution, but do not possess the means to first detect the entire scene at high resolution and then direct the imaging system to provide a lower resolution image, saving bandwidth by not transmitting unwanted imagery data. The problem with these variable resolution approaches is that, if a new target appears within a region of lower resolution, the target may not be detected unless another system tells the lower resolution sensor to increase its resolution, or unless there is a priori information about the expected appearance of an object or target. Most sensors are deployed to sense objects precisely because there is no a priori information on the target.
Image sensors have applications in many fields, including machine vision, guidance and navigation, situational awareness, and general visual security. Advanced sensors may place more image processing near the focal plane circuit, or may use algorithms which improve the resolution, sensitivity, or other performance attribute of the sensor output image.
Pixel size has been limited by the ability to create the necessary interconnect pads and deposit the metal onto the small pads. This in turn has limited the ability to make smaller pixels for, for example, infrared (IR) imagers. Furthermore, since the blur size of IR sensors was typically greater than 15 um, some researchers indicated that it was unnecessary to provide pixels much smaller than the blur of IR imagers.
Efforts are being made to reduce the size, power, and weight of IRFPAs, and it would be highly beneficial to commercial and military imaging apparatus to also improve the sensitivity of undersampled pixels so as to detect objects at longer ranges with improved probability of detection (Pd) and lower probability of false alarm (Pfa).
Yet other techniques have been provided to perform "Super Resolution" on a sequence of undersampled images (US 2008/0175452 A1, Jun. 2007, Ye). Such techniques generally require several sequential frames and further computation over the different frames to estimate their exact translation or location prior to the calculation of a reconstructed Super Resolution frame.
Another limitation of the Super Resolution reconstruction techniques is that, because they require several frames of data, they by definition have latency in the processing and identification of potential objects or targets. In some cases, the latency results in the inability to track and identify fast moving objects in larger imagery formats.
Yet another limitation of the Super Resolution reconstruction techniques is their computational complexity: larger computers with near real time capability are required to construct and compute the Super Resolution image. In addition, continuously providing Super Resolution requires that every frame be processed, increasing the computational burden to such a degree that the processing becomes more difficult as the imager format grows and as the frame rate increases.
As noted above, pixel size was limited in the past by the ability to create the necessary interconnect pads and deposit metal onto the small pads, which in turn limited the ability to make smaller pixels for IR imagers. Furthermore, since the blur size of IR sensors was typically greater than 15 um, some researchers demonstrated the issues with undersampled imagery and aliasing and provided computational techniques, such as Super Resolution reconstruction and deblurring, to improve the resolution beyond the native detector limitations. These Super Resolution reconstruction techniques are limited in that they can require several image frames to create one higher effective resolution image out of several low resolution frames and, moreover, impose challenging computational complexity and cost on the system.
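The multi-frame requirement of such reconstruction can be sketched as a simple shift-and-add scheme. The 2x upsampling factor, the assumption of exactly known sub-pixel shifts, and the function names below are illustrative assumptions, not taken from any cited technique:

```python
import numpy as np

# Illustrative shift-and-add sketch: several low-resolution frames, each
# offset by a known sub-pixel shift, are accumulated onto a finer grid.
# Note that a full high-resolution estimate requires many frames, which
# is the source of the latency and computational burden discussed above.
def shift_and_add(frames, shifts, factor=2):
    """Place each low-res frame onto a (factor x) high-res grid at its
    sub-pixel shift, then normalize by the number of hits per cell."""
    h, w = frames[0].shape
    hi = np.zeros((h * factor, w * factor))
    hits = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, shifts):
        # Map low-res sample positions to shifted high-res positions.
        ys = (np.arange(h) * factor + round(dy * factor)) % (h * factor)
        xs = (np.arange(w) * factor + round(dx * factor)) % (w * factor)
        hi[np.ix_(ys, xs)] += frame
        hits[np.ix_(ys, xs)] += 1.0
    return hi / np.maximum(hits, 1.0)

# Four frames at the four half-pixel offsets are needed just to populate
# every cell of a 2x grid once.
frames = [np.ones((2, 2)) for _ in range(4)]
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
hi_res = shift_and_add(frames, shifts, factor=2)
```

Even this toy version needs one frame per sub-pixel phase, and real techniques must additionally estimate the shifts from the data, which is where most of the computational cost arises.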
Another limitation of current imagers is the difficulty of detecting dim targets in clutter without false alarms.
Another disadvantage of current undersampled imagers is that, when there is platform motion, turbulence, dust/haze, and/or other movement, the incident photons are spread across neighboring pixels in a manner that broadens the blur relative to an ideal Point Spread Function (PSF). This causes image pixel smear and reduces the integrated signal per pixel, making it more difficult to detect objects under even limited motion.
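The signal reduction from motion smear can be illustrated by convolving a stand-in PSF with a linear motion kernel. The Gaussian used in place of a true Airy PSF and the smear length are illustrative assumptions:

```python
import numpy as np

# Illustrative sketch: linear motion during integration smears the PSF,
# lowering the peak signal in any one pixel while conserving total energy.
def gaussian_psf(n=15, sigma=1.5):
    """Normalized Gaussian stand-in for an ideal PSF."""
    ax = np.arange(n) - n // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def smear(psf, length=5):
    """Convolve each row with a uniform kernel (horizontal motion)."""
    kernel = np.ones(length) / length
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), axis=1, arr=psf)

sharp = gaussian_psf()
blurred = smear(sharp)
# The peak drops while the total integrated signal is nearly conserved,
# so the per-pixel signal available for detection is reduced.
peak_ratio = blurred.max() / sharp.max()
```

Under these assumptions the peak falls well below the unsmeared value, which is the per-pixel signal loss that degrades detection under motion.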
What is needed is an imaging sensor that can improve the resolution and acuity of a system in a manner that does not require super resolution reconstruction and the associated challenges.
What is needed is an imaging sensor that can detect dimmer objects at longer distances without creating false positives or false alarms.
To the inventors' knowledge, all previous apparatus or methods for increasing the visual sensitivity of non-oversampled imagers have required external techniques to dither the image and then process and reconstruct it from several lower resolution frames to improve the resolution. What is needed is a system and/or method that provides much higher native resolution and simultaneously improves the probability of detection of a dim target while suppressing false positives.