The quality of an image may be degraded by noise signals found in the digital signal encoding the image. These noise signals may originate from image sensors. For example, sensor noise may be due to insufficient lighting when the image is captured. This sensor noise typically appears as grain of varying visibility and coloration.
A degradation in quality may also result from digital processing applied to the information encoding the image. This is called “digital noise”. For example, compression may result in a degradation of image quality. This is called “compression noise”.
It is known to attempt to correct the noise that degrades an image by applying a low-pass noise reduction filter whose purpose is to attenuate the high frequencies that represent this noise. For example, a low-pass filter may be an averaging filter. With such a filter, the value of each pixel is replaced with an average of the pixels located within a window (called a “kernel”) located around this pixel (called the “target pixel” or “central pixel”).
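The averaging filter described above can be sketched as follows. This is a minimal illustration, not a reference implementation; the function name, the edge-replication padding strategy, and the default kernel size are assumptions made for the example.

```python
import numpy as np

def box_filter(image, kernel_size=3):
    """Averaging (box) low-pass filter.

    Each pixel (the "target pixel") is replaced by the mean of the
    pixels inside a kernel_size x kernel_size window (the "kernel")
    centred on it. Borders are handled by replicating edge pixels,
    one of several common choices.
    """
    pad = kernel_size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            # Window centred on the target pixel (y, x)
            window = padded[y:y + kernel_size, x:x + kernel_size]
            out[y, x] = window.mean()
    return out
```

Applied to a uniform area, the filter leaves the values unchanged; applied to an isolated noisy pixel, it spreads and attenuates the deviation, which is precisely the high-frequency attenuation described above.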
To avoid degradation in detailed areas such as textured areas or contours, which are also represented by high frequencies, the low-pass filter may only be applied when there is significant noise present. In this case, the noise needs to be distinguished from the contours or textures. Conventional solutions propose classifying the image into different parts (with or without details) and applying different filters to each part.
For example, for each pixel in the window, the absolute difference is calculated between the value of this pixel and the value of the target pixel. Then an average of these differences is calculated for the pixels in the window. The low-pass filter is only applied if this average of the differences is below a threshold.
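The conditional scheme just described can be sketched by extending the averaging filter with the mean-absolute-difference test. The function name, default threshold, and padding choice are assumptions for illustration; the decision rule itself (filter only when the average of the absolute differences is below a threshold) follows the description above.

```python
import numpy as np

def conditional_denoise(image, kernel_size=3, threshold=10.0):
    """Apply the averaging filter only where the window looks uniform.

    For each target pixel, compute the mean absolute difference
    between the window pixels and the target pixel. If that mean is
    below `threshold`, the area is treated as uniform (possibly noisy)
    and the average replaces the pixel; otherwise the pixel is kept,
    to preserve contours and textures.
    """
    pad = kernel_size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = image.astype(float).copy()
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + kernel_size, x:x + kernel_size]
            mad = np.abs(window - image[y, x]).mean()
            if mad < threshold:
                out[y, x] = window.mean()
    return out
```

On a step edge the mean absolute difference is large, so the edge pixels are left untouched; on a flat area with a small noisy deviation it is small, so the deviation is averaged out.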
However, a window whose pixels represent a relatively uniform and noisy portion of the image could result in substantially the same averaged value for the differences as a window in which the pixels represent a better quality portion of the image containing fine details (for example, contours or texture).
The threshold is determined during a prior threshold estimation step. For example, the standard deviation of the pixel values in the image may be computed and used to derive the threshold.
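A sketch of such a prior estimation step, assuming the threshold is taken proportional to the global standard deviation of the pixel values (the scaling factor `k` is an assumption for illustration):

```python
import numpy as np

def estimate_threshold(image, k=1.0):
    """Derive a filtering threshold from the image content.

    Uses the standard deviation of all pixel values, scaled by a
    tuning factor k. As noted below, this global statistic cannot
    distinguish a uniform-but-noisy image from a detailed clean one.
    """
    return k * float(np.std(image.astype(float)))
```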
However, these thresholds may be relatively difficult to determine because they depend on the content of the image. A relatively uniform but noisy image could result in substantially the same threshold value as a better quality image with fine details.
The noise correction filter is therefore likely to be applied to regions of the image that contain textures or edges but relatively little noise, at the risk of degrading these regions and reducing the effective resolution through visible blurring. Conversely, the filter may fail to be applied to regions that are relatively uniform and noisy.