Noise reduction filters for digital images and video are known, and several types exist. For example, spatial filters are used to remove noise spatially distributed within a given digital image or a given video frame. Simple spatial filtering algorithms may divide an image into blocks of pixels and use block averaging to remove noise.
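The block-averaging approach described above can be sketched as follows. This is a minimal illustration, not any particular known filter; the function name and the block size of 4 are assumptions for the example:

```python
import numpy as np

def block_average(image, block=4):
    # Hypothetical tile size; practical filters choose it empirically.
    # Replace each block x block tile with the mean of its pixels,
    # suppressing per-pixel noise at the cost of spatial detail.
    h, w = image.shape
    out = image.astype(np.float64).copy()
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            tile = image[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = tile.mean()
    return out
```

Larger blocks remove more noise but also blur more detail, which is why simple spatial filtering alone is often insufficient.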
In digital video, temporal filtering is also used to remove noise in video frames. Temporal filtering exploits the high degree of correlation between pixels of successive frames in a video sequence. For example, filtering two or more corresponding pixels from multiple successive frames removes temporal noise.
Simple temporal filtering techniques include frame averaging in which a pixel in a current frame may be replaced by an average value of current and previous pixel values at the pixel location. Other schemes may use variable weights or filter coefficients to control the relative contribution of pixels from the current frame and pixels from previous frames. Weighted averaging of pixels from previous and current frames is also known as alpha-blending. More sophisticated techniques apply decision filters on every pixel position to non-linearly alpha-blend pixels in current and previous frames to improve image resolution.
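A minimal recursive alpha-blending filter along these lines might look like the following sketch; the function name and the default weight are illustrative assumptions, not taken from any specific scheme:

```python
import numpy as np

def alpha_blend_sequence(frames, alpha=0.5):
    # Recursive weighted average (alpha-blending): each output is
    # alpha * current pixel + (1 - alpha) * accumulated previous output.
    acc = np.asarray(frames[0], dtype=np.float64)
    for frame in frames[1:]:
        acc = alpha * np.asarray(frame, dtype=np.float64) + (1.0 - alpha) * acc
    return acc
```

Setting alpha to 1 disables temporal filtering entirely; smaller values of alpha remove more temporal noise but give previous frames more influence over the output, which is the source of the artifacts discussed below.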
Known noise reduction filters typically include a feedback loop to obtain pixel values of previous frames or pixel locations, so that they can be averaged, alpha-blended or otherwise combined with current pixels.
Unfortunately, there are drawbacks associated with known temporal noise reduction filters. While simple temporal noise reduction filters can effectively remove temporal noise, they may also lead to motion and static artifacts.
Motion artifacts are caused by combining or averaging pixels from different objects. When objects are in motion, pixel locations associated with an object in a previous frame may be associated with another object in the current frame. Combining corresponding locations from a previous frame and a current frame may thus lead to pixels of different objects being blended. As an object moves its position across multiple frames, the blending process may create a ghostly contour that follows the object's motion. This in turn typically results in motion blur or a motion artifact.
On the other hand, even when there is no motion, static artifacts can result from combining pixels from two different but relatively static scenes. When a scene change occurs in a video sequence, there is little or no correlation between the first frame of the new scene and the last frame of the previous scene. Averaging uncorrelated frames typically produces slow transitions, creating artifacts that resemble motion blur. In particular, when the new scene contains a large area of dark pixels that remain static for some time, the alpha-blending process could cause a faded image from the previous scene to remain visible for a long period of time, causing a noticeable static artifact. Such static artifacts are particularly noticeable in dark regions of a frame.
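The persistence of the faded previous scene follows directly from the blend arithmetic: with a fixed weight, the old scene's contribution decays only geometrically. A small worked example, using an assumed blend weight of 0.125:

```python
# Assumed blend weight; heavier weighting of the accumulated frame
# gives stronger noise reduction but slower scene transitions.
alpha = 0.125

# After n frames of the new scene, the old scene still contributes
# (1 - alpha)**n of its original intensity to the blended output.
residual = (1.0 - alpha) ** 16
print(round(residual, 3))  # 0.118: ~12% of the old scene remains after 16 frames
```

At 30 frames per second, roughly an eighth of the previous scene would still be visible about half a second after the cut, which is easily perceptible in dark regions.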
Previously known attempts to counter such drawbacks include precluding alpha-blending in dark areas of frames. Other known methods include reducing the noise threshold in the non-linear filtering process. Unfortunately, both methods tend to reduce the probability of motion or static artifacts at the expense of reduced effectiveness in noise removal.
Similarly, spatial filters that contain negative coefficients in their convolution kernels can cause overshoot and/or undershoot at edge boundaries, leading to false contours along strong edges commonly known as ringing artifacts. In the past, a typical solution involved choosing filters with a cutoff frequency very close to the Nyquist frequency. However, this approach may not work well with video inputs that contain mixed-in graphics and other overlays. Better scalers are needed to maintain video image sharpness and at the same time preserve high quality graphics and overlays.
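The overshoot and undershoot behavior can be demonstrated with a small one-dimensional example. The kernel below is an illustrative normalized kernel with negative side lobes, not a kernel from any particular scaler:

```python
import numpy as np

# Symmetric 5-tap kernel with negative outer coefficients (taps sum to 1).
kernel = np.array([-1.0, 5.0, 24.0, 5.0, -1.0]) / 32.0

# An ideal step edge: a dark region followed by a bright region.
edge = np.array([0.0] * 10 + [1.0] * 10)

filtered = np.convolve(edge, kernel, mode="same")

# The negative taps push the output above 1.0 just after the edge
# (overshoot) and below 0.0 just before it (undershoot); repeated over
# an image, this appears as ringing or false contours along strong edges.
print(filtered.max() > 1.0, filtered.min() < 0.0)  # True True
```

A purely non-negative kernel cannot produce output outside the range of its input, which is why ringing is specifically associated with negative coefficients.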
Accordingly, there is a need for an effective protection filter for use with digital images and video, to reduce static and motion artifacts that may result from: temporal alpha-blending of pixels, convolution kernels with negative coefficients, and the like.