As is well known in the art, digital video image processing systems commonly rely on Moving Picture Experts Group (MPEG) standards for compressing and transmitting digital video images. These MPEG standards use the discrete cosine transform (DCT) for encoding/compressing video images. DCT encoding produces a stream of coefficients, which are then quantized to produce digitized video image data that can be stored or transmitted. This process of discrete cosine transformation and quantization greatly reduces the amount of information needed to represent each frame of video. In order to reconstruct the original video image (e.g. for display), the encoding process is reversed. That is, the video image data is inverse quantized, and then processed using an inverse discrete cosine transform (IDCT).
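As a concrete illustration of this pipeline, the following sketch implements a one-dimensional DCT-II, uniform quantization, and the matching inverse transform in plain Python. It is a simplification for illustration only: real MPEG codecs operate on 8×8 two-dimensional blocks and use per-coefficient quantization matrices rather than the single scalar `step` assumed here.

```python
import math

def dct(samples):
    """Forward DCT-II: express the sample block as frequency coefficients."""
    n = len(samples)
    return [sum(x * math.cos(math.pi / n * (i + 0.5) * k)
                for i, x in enumerate(samples))
            for k in range(n)]

def idct(coeffs):
    """Inverse transform (DCT-III, normalized) recovering the samples."""
    n = len(coeffs)
    return [coeffs[0] / n
            + (2.0 / n) * sum(coeffs[k] * math.cos(math.pi / n * (i + 0.5) * k)
                              for k in range(1, n))
            for i in range(n)]

def quantize(coeffs, step):
    """Divide by the step size and round: the lossy, size-reducing stage."""
    return [round(c / step) for c in coeffs]

def dequantize(levels, step):
    return [level * step for level in levels]

samples = [52, 55, 61, 66, 70, 61, 64, 73]      # one row of pixel values
levels = quantize(dct(samples), step=16)        # small integers to store/transmit
restored = idct(dequantize(levels, step=16))    # approximate reconstruction
```

Note that the transform itself is lossless (the inverse recovers the samples exactly); the compression, and the approximation error, comes entirely from the quantization stage.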
However, during reconstruction of the video image, if there is not enough processing power to maintain the frame rate while performing the full IDCT, then a simplified IDCT is performed. This is typically accomplished by ignoring certain portions of the transform altogether, resulting in imperfect replication of the original image. Image quality will suffer, typically through loss of detail and “blocking” (that is, square areas lacking in detail) within the displayed image.
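One way such a simplified IDCT can be sketched is to truncate the inverse sum after the lowest-frequency coefficients; the `keep` parameter below is a hypothetical knob, not part of any MPEG standard, controlling how much of the transform is evaluated.

```python
import math

def simplified_idct(coeffs, keep):
    """Inverse DCT that evaluates only the first `keep` (lowest-frequency)
    terms of the sum -- cheaper than the full transform, at the cost of the
    fine detail carried by the discarded high-frequency coefficients."""
    n = len(coeffs)
    return [coeffs[0] / n
            + (2.0 / n) * sum(coeffs[k] * math.cos(math.pi / n * (i + 0.5) * k)
                              for k in range(1, min(keep, n)))
            for i in range(n)]

coeffs = [400.0, -30.0, 12.0, -5.0, 2.0, -1.0, 0.5, 0.2]
coarse = simplified_idct(coeffs, keep=2)   # DC plus one AC term: cheap, blocky
full = simplified_idct(coeffs, keep=8)     # all terms: full-quality inverse
```

The truncated version preserves the average brightness of the block (carried by the DC coefficient) but discards the high-frequency terms, which is why the visible symptom is loss of detail rather than gross distortion.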
A known method of reducing the amount of IDCT data that must be processed to reconstruct a video image is to implement a motion compensation technique during the encoding process. For example, consider an area of an image that moves, but does not change in detail. For such an area, much of the IDCT data can be replaced by a motion vector. During subsequent reconstruction of the video image, a motion compensation algorithm applies the motion vector to a previous and/or next image to estimate the correct image data for the relevant portion of the display image. Digital image processing systems commonly rely on a host processor (e.g. the main CPU of a personal computer) to perform motion compensation processing when needed.
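A minimal sketch of this reconstruction step, assuming whole-pixel motion vectors and a single reference frame (real MPEG motion compensation also supports sub-pixel vectors, multiple block sizes, and bidirectional prediction), might look as follows; the function and variable names are illustrative only.

```python
def motion_compensate(reference, top, left, size, mv):
    """Predict a size x size block at (top, left) in the current frame by
    copying the pixels that the motion vector (dy, dx) points to in the
    reference frame."""
    dy, dx = mv
    return [[reference[top + dy + r][left + dx + c] for c in range(size)]
            for r in range(size)]

# Hypothetical 8x8 reference frame with a bright 2x2 "object" at row 2, col 2.
reference = [[0] * 8 for _ in range(8)]
for r in (2, 3):
    for c in (2, 3):
        reference[r][c] = 255

# In the current frame the object has moved down and right by two pixels.
# Instead of re-coding the block's DCT data, the encoder sends the motion
# vector (-2, -2), pointing back at where the pixels came from.
predicted = motion_compensate(reference, top=4, left=4, size=2, mv=(-2, -2))
```

Because only the short vector (and, in practice, a small residual correction) is transmitted, the bulk of the IDCT data for that block is avoided.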
A difficulty with this technique is that it requires processing of multiple image frames (or fields) during reconstruction of the video image. This operation is computationally intensive and can impose an unacceptable burden on the resources of the host processor. If sufficient processing power is unavailable, various motion-compensation “artifacts” (e.g. choppy movements within the frame) become apparent to a person viewing the reconstructed image.
An alternative solution is to embed a single-purpose motion compensation engine within the graphics processing unit (GPU). This approach ensures that all video image processing can be performed independently of the host CPU. However, this solution consumes valuable “real estate” on the GPU, which increases costs. In view of the limited use of the motion compensation engine, this increased cost is frequently considered to be unacceptable.
Accordingly, a system enabling the shared resources of a GPU to perform motion compensation processing during the reconstruction of video images remains highly desirable.