1. Field of the Invention
This invention relates to systems for decoding video images and particularly to methods for improving decoded video image quality by removing coding artifacts and noise.
2. Description of Related Art
"Coding artifacts" are visible degradations in image quality that may appear as a result of encoding and then decoding a video image using a video compression method such as employed for the MPEG-1, MPEG-2, H.261, or H.263 standard. For example, video encoding for each of the MPEG-1, MPEG-2, H.261, and H.263 standards employs some combination of: partitioning frames of a video image into blocks; determining motion vectors for motion compensation of the blocks;
performing a frequency transform (e.g., a discrete cosine transform) on each block or motion-difference block; and quantizing the resultant transform coefficients. Upon decoding, common coding artifacts in a video image include blockiness, which results from discontinuities of block-based motion compensation and inverse frequency transforms at block boundaries, and "mosquito" noise, which surrounds objects in the video image as a result of quantization errors changing transform coefficients. Sources other than encoding and decoding can also introduce noise that degrades image quality. For example, transmission errors or noise in the system recording a video image can create random noise in the video image.
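The transform-and-quantize steps described above, and the quantization error they introduce, can be illustrated with a minimal sketch in Python. This is not the encoding process of any particular standard; the 8x8 block size and orthonormal DCT-II are common practice, but the function names (`dct2`, `idct2`, `quantize`) and the uniform step size `q` are illustrative assumptions:

```python
import math

N = 8  # block dimension, as in the 8x8 blocks of MPEG-style coding

def dct2(block):
    # Orthonormal 2-D DCT-II of an N x N block (direct, unoptimized form).
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            cu = math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
            cv = math.sqrt(1.0 / N) if v == 0 else math.sqrt(2.0 / N)
            out[u][v] = cu * cv * s
    return out

def idct2(coeffs):
    # Inverse of dct2: reconstruct the pixel block from its coefficients.
    out = [[0.0] * N for _ in range(N)]
    for x in range(N):
        for y in range(N):
            s = 0.0
            for u in range(N):
                for v in range(N):
                    cu = math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
                    cv = math.sqrt(1.0 / N) if v == 0 else math.sqrt(2.0 / N)
                    s += (cu * cv * coeffs[u][v]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[x][y] = s
    return out

def quantize(coeffs, q):
    # Uniform quantization: round each coefficient to a multiple of q.
    # The rounding error here is the source of mosquito-type artifacts.
    return [[round(c / q) * q for c in row] for row in coeffs]

# Demo: a block containing a sharp vertical edge.
edge = [[16.0 if y < 4 else 240.0 for y in range(N)] for x in range(N)]
coeffs = dct2(edge)
lossless = idct2(coeffs)              # no quantization: exact round trip
lossy = idct2(quantize(coeffs, q=32)) # quantized: reconstruction error
max_lossless_err = max(abs(lossless[x][y] - edge[x][y])
                       for x in range(N) for y in range(N))
max_lossy_err = max(abs(lossy[x][y] - edge[x][y])
                    for x in range(N) for y in range(N))
```

Without quantization the transform round trip is exact (to floating-point precision); with quantization the reconstructed edge block differs visibly from the original, which is the kind of error that appears as ringing or mosquito noise around sharp edges.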
Postfiltering of a video image processes the video image to improve image quality by removing coding artifacts and noise. For example, spatial postfiltering can smooth the discontinuity at block boundaries and reduce the prominence of noise. Such spatial filtering operates on an array of pixel values representing a frame in the video image and modifies at least some pixel values based on neighboring pixel values. Spatial filtering can be applied uniformly or selectively to specific regions in a frame. For example, selective spatial filtering at a block edge (a known location within a frame) smooths image contrast to reduce blockiness. However, spatial filtering can undesirably make edges and textures of objects in the image look fuzzy or indistinct, and selective spatial filtering can cause "flashing," where the clarity of the edges of an object changes as the object moves through areas filtered differently.
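Selective spatial filtering at block boundaries can be sketched as follows. This is a minimal one-dimensional illustration, not the filter of any cited standard; the function name `deblock_row`, the `block_size` of 8, and the blending parameter `strength` are illustrative assumptions:

```python
def deblock_row(row, block_size=8, strength=0.5):
    """Selectively smooth the pair of pixels straddling each block
    boundary in one row of a frame, leaving interior pixels untouched."""
    out = list(row)
    for b in range(block_size, len(row), block_size):
        left, right = row[b - 1], row[b]
        avg = (left + right) / 2.0
        # Pull each boundary pixel toward the boundary average;
        # strength controls how aggressively the edge step is smoothed.
        out[b - 1] = left + strength * (avg - left)
        out[b] = right + strength * (avg - right)
    return out

# Demo: a row with a blocky step discontinuity at the boundary (index 8).
row = [10.0] * 8 + [20.0] * 8
smoothed = deblock_row(row)
```

With `strength=0.5`, the boundary pair (10, 20) becomes (12.5, 17.5), reducing the step at the block edge while leaving pixels away from the boundary unchanged. Note that applying the same operation at a genuine object edge would blur it, which is exactly the fuzziness problem described above.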
Temporal filtering operates on a current array of pixel values representing a current frame and combines pixel values from the current array with pixel values from one or more arrays representing prior or subsequent frames. Typically, temporal filtering combines a pixel value in the current array with the pixel value in the same relative position in an array representing a prior frame, under the assumption that the area remains visually similar. If noise or a coding artifact affects a pixel value in the current array but not the related pixel values in the prior frames, temporal filtering reduces the prominence of the noise or coding artifact. A problem with temporal filtering arises from motion in the video image, where the content of the image in one frame shifts in the next frame so that temporal filtering combines pixels in the current frame with visually dissimilar pixels in prior frames. When this occurs, the contribution of the dissimilar pixels creates a ghost of a prior frame in the current frame. Accordingly, temporal filtering can introduce undesired artifacts in a video image.
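Both behaviors of temporal filtering, noise suppression on static content and ghosting under motion, can be shown with a minimal sketch. This is a simple first-order recursive blend, assumed for illustration only; the function name `temporal_filter` and the weight `alpha` are not from any cited standard:

```python
def temporal_filter(frames, alpha=0.5):
    """First-order recursive temporal filter: each output frame is a
    weighted blend of the current frame and the previous filtered frame,
    combining co-located pixel values across frames."""
    filtered = [list(frames[0])]
    for frame in frames[1:]:
        prev = filtered[-1]
        filtered.append([alpha * cur + (1.0 - alpha) * pre
                         for cur, pre in zip(frame, prev)])
    return filtered

# A noise spike on an otherwise static pixel is attenuated:
# the spike of +60 at frame 1, pixel 1 is halved by the blend.
static_noisy = [[100.0, 100.0], [100.0, 160.0], [100.0, 100.0]]
static_out = temporal_filter(static_noisy)

# But a bright object that moves leaves a "ghost" at its old position:
# the object moves from pixel 0 to pixel 1, yet pixel 0 stays half-bright.
moving = [[200.0, 0.0, 0.0], [0.0, 200.0, 0.0]]
moving_out = temporal_filter(moving)
```

The static case shows the intended effect (the transient spike is reduced toward the stable value); the moving case shows the failure mode described above, where the contribution of dissimilar prior pixels leaves a ghost of the object at its previous location.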
Postfiltering processes are sought that better remove coding artifacts and noise while preserving image features and not introducing further degradations.