Noise reduction in image processing is widely known, and methods for noise reduction are abundant. In the past, different noise reduction methods were often adapted to filter out a specific type of noise. For example, non-linear filters, such as median filters, are well suited for filtering out “shot noise”, which affects the values of individual pixels in a random fashion, whereas linear filters, such as mean filters, are better suited for filtering out Gaussian noise. These filters are, in effect, low-pass filters and thus blur the image. Consequently, the size of the pixel array, or stencil, of the filters is generally kept small so that the blurring does not significantly degrade the image.
With the advent of digital cameras, a new pixel sampling scheme (the Bayer matrix) has become popular, where only a subset of the pixels is sampled for each color component. Before images taken through a Bayer matrix can be used, the color components have to be up-sampled by means of interpolation. The format of images taken through a Bayer matrix is shown in FIG. 1. When the color components are separated from the Bayer matrixed image as shown, each color component has many missing pixels, and the pixel locations of one color component differ from those of another. As shown in FIG. 1, one half of the pixel lines in a Bayer matrixed image contain blue pixels interlaced with green pixels, and the other half contain red pixels interlaced with green pixels. Thus, 50% of the pixels are missing in the green component, and 75% in either the red or the blue component. The interpolation of each color component, known as color filter array (CFA) interpolation or de-mosaic interpolation, smears noise from a pixel to neighboring pixels. As a result, conventional noise filtering methods become less effective on such a digital image. Furthermore, when conventional noise-filtering methods are applied directly to the color components prior to CFA interpolation, the stencil of the noise filters must be widened because of the missing pixels. As a result, extensive blurring occurs, especially around edge-like structures in the image. When noise filtering is instead applied to an interpolated image, approximately three times as much processing power is required to carry out the filtering operation compared to performing the filtering before the interpolation.
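The separation of a Bayer matrixed image into its color components can be illustrated with the following sketch (an illustration only, not part of any claimed embodiment; the GB/RG line layout is assumed from FIG. 1, and unsampled positions are marked `None` to make the missing pixels explicit):

```python
def separate_bayer(raw):
    """Split a 2-D list of raw sensor values into G, R and B planes.

    Assumes even rows are blue lines (B G B G ...) and odd rows are
    red lines (G R G R ...), as in FIG. 1. Positions not sampled for
    a given color are left as None.
    """
    h, w = len(raw), len(raw[0])
    planes = {c: [[None] * w for _ in range(h)] for c in "GRB"}
    for y in range(h):
        for x in range(w):
            if y % 2 == 0:                      # blue line: B G B G ...
                color = "B" if x % 2 == 0 else "G"
            else:                               # red line: G R G R ...
                color = "G" if x % 2 == 0 else "R"
            planes[color][y][x] = raw[y][x]
    return planes
```

On a 4x4 image, the green plane keeps 8 of 16 pixels (50% missing), while the red and blue planes keep 4 of 16 pixels each (75% missing), matching the percentages stated above.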
Okisu (U.S. Pat. No. 6,091,862) discloses a method and device for pixel interpolation in the “up-sampling” process to fill in the pixel value of the missing pixels. According to the method of Okisu, when the pixel value of a missing pixel in a color component is calculated for pixel interpolation, the slopes of pixel values in the same color component around that pixel are used to provide pixel weighting factors in order to improve the interpolated result. While Okisu can reduce smoothing of edges, its effect is limited because of the missing pixels in the color component under pixel interpolation.
Noise filtering a color component using only the pixel values of the same color is known, especially when a mean filter or a median filter is used. FIGS. 1 to 3b are used to illustrate how different noise reduction filters can be used to filter different color components. FIGS. 1 to 3b show a section of a color image taken through a Bayer matrix. In particular, FIG. 2a shows a section of the Bayer matrixed image with relevant pixels used when a green pixel in a blue line is filtered. FIG. 2b shows a section of the Bayer matrixed image with relevant pixels when a green pixel in a red line is filtered. In both FIGS. 2a and 2b, the pixel value of the green pixel to be filtered is denoted by the letter O. The pixel values of the surrounding green pixels are denoted by letters A, B, C, D, E, F, G and H, whereas the pixel values of the surrounding red and blue pixels are denoted by R1-R6, B1-B6.
To noise filter the green color component, for example, only the pixel values of green pixels are used. To generalize the noise filtering process, it can be said that the green pixel having the pixel value O is filtered by a filter F( ), and the pixel value of the filtered pixel is denoted by the notation F(O). If the pixels to be used in noise filtering are confined to the area shown in FIG. 2a or 2b, then the filter F( ) is a combination of pixel values selected only from A, B, C, D, E, F, G and H. The filter F( ) is not part of the present invention. Thus, F( ) can be a prior art noise filter, any combination of prior art filters, or even any novel filter suitable for noise reduction.
Likewise, when the red color component is subject to noise filtering, only the pixel values of red pixels are used, and when the blue color component is subject to noise filtering, only the pixel values of blue pixels are used. FIG. 3a shows a section of the Bayer matrixed image with relevant pixels when a red pixel is filtered. In FIG. 3a, the pixel values of the surrounding red pixels are denoted by letters A, B, C, D, E, F, G and H, whereas the pixel values of the surrounding green and blue pixels are denoted by G1-G4 and B1-B4. FIG. 3b shows a section of the Bayer matrixed image with relevant pixels when a blue pixel is filtered. In FIG. 3b, the pixel values of the surrounding blue pixels are denoted by letters A, B, C, D, E, F, G and H, whereas the pixel values of the surrounding green and red pixels are denoted by G1-G4 and R1-R4.
In the prior art, two different classes of spatial noise reduction filters are used: one is classified as non-linear and the other as linear. A median filter is an example of a non-linear filter, and a mean filter is an example of a linear filter. The combination of a non-linear filter and a linear filter is another non-linear filter, which can also be used in noise filtering. A median filter effectively removes impulsive noise, whereas a mean filter effectively removes Gaussian noise. For a human observer, sharp edges are important due to the properties of the Human Visual System (HVS). Thus, the implementation of filters should take into account not only the noise reduction aspect but also the edge preservation aspect of the filtering process. Furthermore, low processing power and low memory consumption should also be considered and, therefore, the size of the filter window must be kept as small as possible. As presented above, combining these requirements is a problem in the methods of the prior art.
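The different behavior of the two filter classes on impulsive noise can be illustrated with a one-dimensional sketch (an illustration only, not part of any claimed embodiment): a median filter suppresses a single impulsive outlier completely, while a mean filter of the same window size merely dilutes it into the output.

```python
import statistics

# A 5-pixel window containing one impulsive ("shot") noise spike of 255
# in an otherwise flat region of value 10.
window = [10, 10, 255, 10, 10]

# Non-linear filter: the median discards the outlier entirely.
median_out = statistics.median(window)   # 10 -> spike removed

# Linear filter: the mean keeps a weighted trace of the outlier.
mean_out = statistics.mean(window)       # 59 -> spike smeared into output
```

The same contrast does not hold for Gaussian noise, where averaging over the window reduces the noise variance and the mean filter is the more effective of the two.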
1.0 Multistage Median Filter
The multistage median filter, as described below, has line- and edge-preserving properties while still reducing noise effectively. It consists of four basic elements: a five-point “+”-median filter, a five-point “×”-median filter, the original pixel, and a three-point median filter of the previous three.
Thus, when a pixel having the pixel value O, as shown in FIGS. 2a to 3b, is filtered, we have:
“+”-med=median5(O, A, B, C, D)  (1.1)
“×”-med=median5(O, E, F, G, H)  (1.2)
original=O  (1.3)
output value=F(O)=median3(“+”-med, “×”-med, original)  (1.4)
This filter reduces impulse-like noise effectively, and it can be used to attenuate Gaussian noise as well. Edges and lines are also effectively preserved.
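The multistage median filter of equations (1.1)-(1.4) can be sketched as follows (an illustration only, not part of any claimed embodiment; Python's `statistics.median` stands in for the median5 and median3 operators):

```python
import statistics

def multistage_median(O, A, B, C, D, E, F, G, H):
    """Multistage median filter of equations (1.1)-(1.4).

    O is the pixel being filtered; A-D lie on the "+" stencil and
    E-H on the "x" stencil of FIGS. 2a-3b.
    """
    plus_med = statistics.median([O, A, B, C, D])      # (1.1)
    cross_med = statistics.median([O, E, F, G, H])     # (1.2)
    original = O                                       # (1.3)
    return statistics.median([plus_med, cross_med, original])  # (1.4)
```

For example, an impulsive outlier O=255 surrounded by the value 10 is replaced by 10, since both five-point medians reject the spike and the final three-point median then outvotes the original pixel.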
2.0 Mean Filter
A mean filter usually computes a weighted average of pixels. Usually, only the nearest pixels are included in the filter window. Thus, the smoothing of edges and lines is minimal and effective noise reduction properties are preserved. Mean filters can be classified into non-directional and directional filters, as described below.
2.1 Non-directional Mean Filter
When green pixels are filtered (FIG. 2a or 2b), we can use, for example:
outputG=(4*O+E+F+G+H)/8  (2.1.1)
When red (or blue) pixels are filtered (FIG. 3a or 3b), we can use, for example:
outputC=(4*O+A+B+C+D)/8  (2.1.2)
This filter reduces Gaussian noise effectively and attenuates impulse-like noise. However, being a non-directional filter, this mean filter does not effectively preserve lines and edges.
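The non-directional mean filters (2.1.1) and (2.1.2) can be sketched as follows (an illustration only, not part of any claimed embodiment; the letters follow the stencils of FIGS. 2a-3b):

```python
def mean_green(O, E, F, G, H):
    """Non-directional mean filter (2.1.1) for green pixels.

    Green pixels use the diagonal same-color neighbours E-H
    (FIG. 2a or 2b)."""
    return (4 * O + E + F + G + H) / 8

def mean_red_blue(O, A, B, C, D):
    """Non-directional mean filter (2.1.2) for red or blue pixels.

    Red and blue pixels use the same-color neighbours A-D
    (FIG. 3a or 3b)."""
    return (4 * O + A + B + C + D) / 8
```

The weight of 4 on the center pixel O limits the smoothing: the output can move at most halfway from O toward the neighbourhood average.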
2.2 Directional Mean Filter
In order to preserve edges and lines more effectively, it is preferred to use directional mean filters as follows:
The green pixels are filtered using, e.g., one of the two following filters (FIGS. 2a and 2b):
output1G=(2*O+E+H)/4  (2.2.1)
or
output2G=(2*O+F+G)/4.  (2.2.2)
The red (or blue) pixels are filtered using, for example, one of the two following filters (FIGS. 3a and 3b):
output1C=(2*O+A+B)/4  (2.2.3)
or
output2C=(2*O+C+D)/4.  (2.2.4)
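The directional mean filters (2.2.1)-(2.2.4) all share the same weighting and differ only in which pair of same-color neighbours is averaged in, so they can be sketched with a single helper (an illustration only, not part of any claimed embodiment; how the direction is selected, e.g. so that the filter averages along a local edge rather than across it, is left open here):

```python
def dir_mean(O, p, q):
    """Directional mean of equations (2.2.1)-(2.2.4).

    O is the pixel being filtered; p and q are the two same-color
    neighbours lying in the chosen direction, e.g.
    output1G = dir_mean(O, E, H) for (2.2.1) and
    output2C = dir_mean(O, C, D) for (2.2.4)."""
    return (2 * O + p + q) / 4
```

Averaging only the neighbour pair aligned with an edge leaves the pixel values across the edge untouched, which is why these filters preserve lines and edges better than the non-directional filter of section 2.1.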
Lowpass spatial filters, similar to those described hereinabove, and highpass spatial filters can be found in “Digital Image Processing” by R. C. Gonzalez and R. E. Woods (Addison Wesley Longman, 1993, pp. 189-201).