Focal plane array (FPA) sensors are widely used in visible-light and infrared imaging systems. In particular, FPAs have been widely used in military applications, environmental monitoring, scientific instrumentation, and medical imaging due to their sensitivity and low cost. Most recently, research has focused on embedding powerful image/signal processing capabilities into FPA sensors. An FPA sensor comprises a two-dimensional array of photodetectors placed in the focal plane of an imaging lens. Individual detectors within the array may perform well, but the overall performance of the array is strongly affected by the lack of uniformity in the responses of the detectors taken together. This non-uniformity of response is especially severe for infrared FPAs.
From a signal processing perspective, this non-uniformity problem can be restated as how to automatically remove fixed-pattern noise at each pixel location. The FPA sensors are modeled as having fixed (or static) pattern noise superimposed on a true (i.e., noise-free) image. The fixed pattern noise is attributed to spatial non-uniformity in the photo-response (i.e., the conversion of photons to electrons) of the individual detectors in the array of pixels which constitutes the FPA. The response is generally characterized by a linear model:

zt(x,y) = gt(x,y)·st(x,y) + bt(x,y) + N(x,y),  (1)

where N(x,y) is random noise, zt(x,y) is the observed scene value at time t for a pixel at position (x,y) in an array of pixels (image) modeled as being arranged in a rectangular coordinate grid, st(x,y) is the true scene value (e.g., irradiance collected by the detector) at time t, gt(x,y) is the gain of the pixel at position (x,y) and time t, and bt(x,y) is the offset of the pixel at position (x,y) at time t. gt(x,y) can also be referred to as the gain image associated with noise affecting the array of pixels, and bt(x,y) as the offset image associated with noise added to the pixels. The task of non-uniformity correction (NUC) algorithms is to obtain st(x,y) by estimating the parameters g(x,y) and b(x,y) from the observed zt(x,y). Hereinafter g(x,y) and b(x,y) will be referred to as the gain and offset, respectively, and the random noise term N(x,y) will be ignored. Generally speaking, the gain and offset are both functions of time, as they drift slowly with changes in temperature.
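As a minimal numerical sketch of the linear model of Equation 1 (the array size, gain spread, and offset spread below are illustrative assumptions, not values from the disclosure; the random noise term is ignored as stated above):

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 4, 4

s = rng.uniform(0.0, 1.0, size=(H, W))        # true scene irradiance st(x,y)
g = 1.0 + 0.1 * rng.standard_normal((H, W))   # per-pixel gain image gt(x,y)
b = 0.05 * rng.standard_normal((H, W))        # per-pixel offset image bt(x,y)

# Observed image under the linear model (Equation 1, noise term ignored)
z = g * s + b

# If the gain and offset were known exactly, the true scene is recovered
s_hat = (z - b) / g
assert np.allclose(s_hat, s)
```

The point of NUC algorithms is precisely that g and b are not known and must be estimated from the observations z.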
Two-point and one-point non-uniformity correction (NUC) are techniques commonly used to counteract fixed pattern noise. Two-point NUC solves for the two unknowns g(x,y) and b(x,y) for all pixels (x,y) in Equation 1 by processing two images taken of two distinct sources, e.g., two uniform heat sources in an infrared imaging system (i.e., a “hot” source and a “cold” source), or a “light” image and a “dark” image in an optical imaging system. Since two distinct sources are hard to maintain, camera manufacturers use one source to counteract offset drift in real-time applications, which is often referred to as one-point NUC. In a one-point NUC, gain information is stored in a lookup table as a function of temperature, which can be loaded upon update. Given the gain, Equation 1 is solved to obtain the offset b(x,y). Both calibration processes need to interrupt (reset) real-time video operation, i.e., a calibration must be performed every few minutes to counteract the slow drift of the noise with time and ambient temperature. This is inappropriate for applications such as visual systems used on a battlefield or for video surveillance.
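The two-point calibration described above can be sketched as follows: with two uniform sources of known level, Equation 1 gives two linear equations per pixel, which are solved for the gain and offset (the source levels and noise magnitudes here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 4, 4
g_true = 1.0 + 0.1 * rng.standard_normal((H, W))  # unknown gain image
b_true = 0.05 * rng.standard_normal((H, W))       # unknown offset image

# Two uniform calibration sources of known radiance ("cold" and "hot")
s_cold, s_hot = 0.2, 0.8
z_cold = g_true * s_cold + b_true
z_hot = g_true * s_hot + b_true

# Solve the two per-pixel linear equations of Equation 1 for gain and offset
g_est = (z_hot - z_cold) / (s_hot - s_cold)
b_est = z_cold - g_est * s_cold

assert np.allclose(g_est, g_true)
assert np.allclose(b_est, b_true)
```

One-point NUC corresponds to the special case where g_est is taken from a temperature-indexed lookup table and only the offset equation is solved.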
Scene-based NUC techniques have been developed to continuously correct FPA non-uniformity without the need to interrupt (reset) the real-time video sequence. These techniques include statistical methods and registration methods. Certain statistical methods assume that all possible values of the true-scene pixel are seen at each pixel location, i.e., if a sequence of video images is examined, each pixel is assumed to have experienced a full range of values, say 20 to 220 out of a range of 0 to 255. Said another way, these statistical methods assume globally constant statistics. Based on this assumption, the offset and gain are related to the temporal mean and standard deviation of the pixels at each pixel location (x,y). Global constant-statistics (CS) algorithms assume that the temporal mean and standard deviation of the true signal at each pixel are constant over space and time. Furthermore, zero mean and unity standard deviation of the true signals st(x,y) are assumed, such that the gain and offset at each pixel are related to the mean and standard deviation by the following equations:
b(x,y) ≅ m(x,y) = Σt=0…T−1 zt(x,y)/T,  Σx,y b(x,y)/T = 0,  (2)

g(x,y) ≅ σ(x,y) = √[Σt=0…T−1 (zt(x,y) − m(x,y))²/(T−1)],  Σx,y g(x,y)/T = 1,  (3)

where m(x,y) is the temporal mean at (x,y), σ(x,y) is the temporal standard deviation at (x,y), and T is the number of frames. Both the mean and the standard deviation can be obtained recursively. The estimated true signal at (x,y) is then expressed as:

ŝt(x,y) = (zt(x,y) − b(x,y))/g(x,y).  (4)
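The constant-statistics estimation of Equations 2 through 4 can be sketched as follows, under the CS assumption that the true signal at every pixel has zero mean and unit standard deviation over time (the sequence length and noise magnitudes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
T, H, W = 2000, 8, 8

g = 1.0 + 0.1 * rng.standard_normal((H, W))  # unknown gain image
b = 0.2 * rng.standard_normal((H, W))        # unknown offset image

# True scene values drawn i.i.d. with zero mean and unit standard
# deviation at every pixel, as the CS assumption requires
s = rng.standard_normal((T, H, W))
z = g * s + b  # observed sequence (Equation 1, random noise ignored)

# Equation 2: offset ~ temporal mean; Equation 3: gain ~ temporal std
b_est = z.mean(axis=0)
g_est = z.std(axis=0, ddof=1)

# Equation 4: estimated true signal for a given frame
s_hat = (z[0] - b_est) / g_est

assert np.allclose(b_est, b, atol=0.2)
assert np.allclose(g_est, g, atol=0.2)
```

In a real-time implementation both statistics would be maintained with recursive (running) updates rather than by storing T frames, as noted above.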
Registration methods assume that when images are aligned to each other, the aligned images have the same true-scene value at a given pixel location. Even if the scene is moving, a pixel that is aligned across all of the images will have the same value.
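A toy illustration of this alignment assumption (offset-only noise with unit gain, a known one-pixel camera shift, and illustrative array sizes, all assumptions made here for simplicity): once two frames are registered, overlapping pixels observe the same true-scene value, so their difference depends only on the fixed-pattern noise.

```python
import numpy as np

rng = np.random.default_rng(3)

# A large "world" scene; each frame is a shifted 8x8 window into it
world = rng.uniform(0.0, 1.0, size=(32, 32))
b = 0.1 * rng.standard_normal((8, 8))  # fixed-pattern offset (unit gain)

def frame(y0, x0):
    """Observe an 8x8 window of the world with fixed-pattern offset added."""
    return world[y0:y0 + 8, x0:x0 + 8] + b

z1 = frame(4, 4)
z2 = frame(4, 5)  # camera shifted one pixel to the right

# After registration, overlapping pixels see the same true-scene value,
# so the difference isolates differences of the fixed-pattern offset
diff = z1[:, 1:] - z2[:, :-1]
assert np.allclose(diff, b[:, 1:] - b[:, :-1])
```

Practical registration methods must estimate the (generally sub-pixel) global motion from the images themselves, which is where most of their computational cost lies.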
In general, statistical methods are not computationally expensive and are easy to implement, but statistical methods based on globally constant statistics require many frames, and the camera needs to move in such a way as to satisfy the statistical assumption. Registration methods require fewer frames. However, they rely on accurate global motion estimation and are computationally expensive. The assumption of the same true-scene pixel in the aligned image breaks down when the true signal response is affected by lighting changes, the camera's automatic gain control (AGC) (which automatically adjusts saturation, hue, and brightness), and random noise.
A problem with both the statistical and registration approaches is that they tend to exhibit “ghosting” artifacts when a scene remains stationary or a camera freezes. A simple de-ghosting method is to detect changes in a sequence of images and to ignore a particular image if its change from the previous image is less than a threshold. “Ghosting” artifacts occur in processed images when the global constant-statistics assumption of the statistical approach is broken, i.e., when the range of possible values of the true-scene pixel differs at various pixel locations. This means that, at one set of locations, the pixels experience one range of values, say 20 to 150, while at another set of locations in the same image, the pixels experience a different range, say 50 to 200. An illustrative example of the breakdown of the global constant-statistics assumption producing “ghosting” artifacts is shown in FIGS. 1A and 1B, wherein a video sequence is taken by a camera on a moving vehicle. In FIG. 1A, a sample image shows part sky 2, part trees 3, and part ground 4, overlaid with fixed pattern noise 6. The resulting average image of the sequence (2,000 frames) in FIG. 1B, assuming a global mean, shows an upper bright area 8, a dark middle area 10, and a bottom gray area 12. Since a global constant-statistics method assumes a constant mean, these spatial variations in the average image are interpreted as offset. This so-called “over-shoot” leaves a reverse ghost image.
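The simple threshold-based de-ghosting gate described above can be sketched as follows (the threshold value and the mean-absolute-difference change measure are illustrative assumptions):

```python
import numpy as np

def should_update(prev, curr, threshold=0.01):
    """De-ghosting gate: skip the statistics update for a frame whose mean
    absolute change from the previous frame is below the threshold, i.e.
    when the scene appears stationary or the camera appears frozen."""
    return float(np.mean(np.abs(curr - prev))) >= threshold

rng = np.random.default_rng(4)
a = rng.uniform(size=(8, 8))

assert not should_update(a, a)        # identical frames: skip the update
assert should_update(a, a + 0.5)      # clearly changed frame: update
```

Such gating only suppresses ghosting from frozen scenes; it does not address the breakdown of the global constant-statistics assumption illustrated in FIGS. 1A and 1B.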
Accordingly, what would be desirable, but has not yet been provided, is a statistical NUC method for eliminating fixed pattern noise in imaging systems that is not susceptible to “ghosting.”