1. Field of the Invention
The invention relates to a method and system for motion estimation and, more particularly, to a method and system for motion estimation using chrominance information.
2. Description of Related Art
In international video compression standards such as MPEG-x and H.26x, inter-frame prediction that applies block matching to motion estimation is widely used to obtain high efficiency in motion picture data coding. FIG. 1 shows a flowchart of typical inter-frame coding. As shown in FIG. 1, an MPEG system divides a frame into macroblocks (MBs) or sub-macroblocks, wherein a macroblock is a 16×16 pixel block and sub-macroblocks can be 8×8 or 4×4 pixel blocks. For a previous frame (forward or backward) 11 and a current frame 12, when coding, a corresponding motion vector is first found for each block 101′ of the previous frame 11. Accordingly, motion estimation on the previous frame 11 yields a prediction frame 13. A difference frame 14 is obtained by comparing the prediction frame 13 with the current frame 12. As such, only the motion vectors and the difference frame 14 are required for transmission or storage, and effective compression is obtained. For decompression, the motion vectors and the difference frame 14 are sent to an MPEG decoder, and the original blocks of the current frame 12 are restored by adding blocks read from the previous frame 11 based on the motion vectors to blocks of the difference frame 14.
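The residual-coding principle described above can be illustrated with a minimal sketch. This is not taken from any standard; the function names and the use of NumPy arrays for macroblocks are illustrative assumptions, and motion compensation is reduced to a single already-aligned block for brevity:

```python
import numpy as np

def encode_block(pred_block, cur_block):
    # Encoder keeps only the residual (difference) between the
    # motion-compensated prediction and the current block.
    return cur_block.astype(np.int16) - pred_block.astype(np.int16)

def decode_block(pred_block, residual):
    # Decoder restores the current block by adding the residual
    # back onto the prediction read from the previous frame.
    return (pred_block.astype(np.int16) + residual).astype(np.uint8)

# Illustrative 16x16 macroblocks filled with arbitrary test data.
rng = np.random.default_rng(0)
pred = rng.integers(0, 256, (16, 16), dtype=np.uint8)
cur = rng.integers(0, 256, (16, 16), dtype=np.uint8)

residual = encode_block(pred, cur)
restored = decode_block(pred, residual)
assert np.array_equal(restored, cur)
```

Because the residual is typically much smaller in magnitude than the raw block, it compresses far better than the original pixel data, which is the source of the coding gain described above.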
As shown in FIG. 2, the motion estimation takes each block 101 of the current frame 12 and finds a corresponding block 101′ in the previous frame 11, thereby obtaining the motion behavior of the block 101′ and thus determining a corresponding motion vector. Typically, a luminance (i.e., Y component) difference between the blocks 101′ and 101 can represent the degree of similarity between the blocks 101 and 101′. As shown in FIG. 3, a typical motion estimation with luminance first computes luminance differences Y1-SAD˜Yn-SAD between the luminance Y of a target block and the luminance Y1′˜Yn′ of candidate blocks corresponding to the target block. The difference computation can use the known sum of absolute differences (SAD). The differences Y1-SAD˜Yn-SAD are compared with each other to find the minimum one, and accordingly the motion vector (MV) of the target block. Generally, such an approach can achieve a satisfactory prediction. However, for some areas with low luminance, motion estimation based on luminance alone becomes inaccurate. Errors caused by the inaccurate motion estimation further propagate to subsequently processed predictive-coded pictures, which can reduce the quality of the image frame due to an over-valued quantization step size.
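The luminance-only SAD search described above can be sketched as a full search over a small window. This is a minimal illustration, not an implementation from any codec; the function names, the ±search window, and the 8×8 block size are assumptions:

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences between two equally sized blocks.
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def luma_motion_vector(prev_y, target, tx, ty, search=4):
    # Full search: evaluate every candidate position within a
    # +/- search window around (tx, ty) in the previous frame's
    # luminance plane and keep the one minimizing the Y-SAD.
    h, w = target.shape
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = ty + dy, tx + dx
            if (y0 < 0 or x0 < 0 or
                    y0 + h > prev_y.shape[0] or x0 + w > prev_y.shape[1]):
                continue  # candidate block falls outside the frame
            cand = prev_y[y0:y0 + h, x0:x0 + w]
            s = sad(target, cand)
            if best_sad is None or s < best_sad:
                best_sad, best_mv = s, (dx, dy)
    return best_mv, best_sad
```

When the target block is an exact copy of some region of the previous frame, the search returns the displacement of that region with a SAD of zero; in low-luminance areas, however, many candidates may yield nearly identical small SADs, which is the inaccuracy noted above.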
To overcome this, as shown in FIG. 4, a typical method respectively computes luminance differences Y1-SAD˜Yn-SAD between the luminance Y of a target block and the luminance Y1′˜Yn′ of candidate blocks, and chrominance differences U1-SAD˜Un-SAD, V1-SAD˜Vn-SAD between the chrominance U/V of the target block and the chrominance U1′˜Un′/V1′˜Vn′ of the candidate blocks. Next, the luminance differences and the chrominance differences are summed respectively to obtain combined luminance and chrominance differences YUV1-SAD˜YUVn-SAD, and accordingly the minimum one among the combined differences YUV1-SAD˜YUVn-SAD can be found to thus determine the motion vector. Such a method computes and compares the luminance and chrominance differences without weighting, which reduces overall coding efficiency.
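The unweighted combination described above can be sketched as follows. The dictionary layout for a YUV block and the function name are illustrative assumptions, not part of the method in FIG. 4:

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences between two equally sized blocks.
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def yuv_sad_select(target_yuv, candidates_yuv):
    # For each candidate block, sum the Y-, U- and V-SADs with equal
    # (i.e., no) weighting, then return the index of the candidate
    # with the smallest combined YUV-SAD and the list of totals.
    totals = []
    for cand in candidates_yuv:
        total = sum(sad(target_yuv[c], cand[c]) for c in ("Y", "U", "V"))
        totals.append(total)
    return int(np.argmin(totals)), totals
```

Because every candidate now requires three SAD computations instead of one, and all channels are weighted equally regardless of their reliability, this combination adds cost without targeting the channels where it helps, which is the efficiency drawback noted above.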
As shown in FIG. 5, another typical method respectively computes luminance differences Y1-SAD˜Yn-SAD between the luminance Y of a target block and the luminance Y1′˜Yn′ of candidate blocks, and chrominance differences U1-SAD˜Un-SAD, V1-SAD˜Vn-SAD between the chrominance U/V of the target block and the chrominance U1′˜Un′/V1′˜Vn′ of the candidate blocks. Next, the minimum one among the luminance differences Y1-SAD˜Yn-SAD and the minimum ones among the chrominance differences U1-SAD˜Un-SAD, V1-SAD˜Vn-SAD are found respectively, to further determine the luminance motion vector MV-Y and the chrominance motion vectors MV-U, MV-V. Finally, among the motion vectors MV-Y, MV-U and MV-V, the one corresponding to the minimum difference is selected as the motion vector. Such a method requires two to three times the computation of luminance-only motion estimation, and accordingly the computational load and bandwidth usage are relatively increased. Therefore, it is desirable to provide an improved method and system for motion estimation to mitigate and/or obviate the aforementioned problems.
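The per-channel selection described above can be sketched as follows. As with the previous sketch, the data layout and names are illustrative assumptions; candidate indices stand in for motion vectors:

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences between two equally sized blocks.
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def per_channel_mv_select(target_yuv, candidates_yuv):
    # FIG. 5 style selection: first find the best candidate per channel
    # (giving MV-Y, MV-U and MV-V), then pick the channel whose minimum
    # SAD is smallest overall; that channel's candidate index stands in
    # for the final motion vector.
    best = {}
    for c in ("Y", "U", "V"):
        sads = [sad(target_yuv[c], cand[c]) for cand in candidates_yuv]
        idx = int(np.argmin(sads))
        best[c] = (idx, sads[idx])
    winner = min(best, key=lambda c: best[c][1])
    return best[winner][0], best, winner
```

Note that the three per-channel searches each scan every candidate, so the SAD workload is roughly three times that of the luminance-only search, matching the computational drawback described above.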