1. Field of the Invention
The present invention relates to a digital image processor. More particularly, the present invention relates to a moving image coder for coding image data with high efficiency.
2. Description of the Related Art
In a general coding method, a digital moving image is divided into small blocks, and a prediction mode is selected for each image block. A prediction error is then orthogonally transformed and adaptively quantized. For example, such a coding method is proposed for MPEG2 interframe coding by Hiroshi Watanabe in ITEJ Technical Report Vol. 16, No. 61, pp. 37-42, ICS '92-73 (October 1992).
As shown in FIG. 11, a moving image can be considered a series of frames continuous in the time direction. Because the frames are continuous in the time direction, an arbitrary frame has a high correlation with its adjacent frames, and the image data can be effectively compressed by prediction coding between the frames. Here, an I-frame is an intra frame, coded by using only information of the present frame.
A P-frame is a frame predicted in the forward direction, with a preceding I-frame or P-frame used as the past frame; here, forward prediction is the frame prediction mode that makes a prediction from a past frame. A B-frame is a frame for which one of three kinds of predictions can be selected: forward prediction, backward prediction, and bidirectional prediction. Backward prediction is the frame prediction mode that makes a prediction from a future frame, and bidirectional prediction is the frame prediction mode that uses interpolation from both the past and future frames.
The I-frame or the P-frame is used as the past or future frame. For each block, the prediction having the minimum error amount among the error amounts of the forward prediction, the backward prediction and the bidirectional prediction is selected. In this case, the bidirectional prediction error is the error in prediction with respect to an interpolated image formed by averaging, or interpolating between, the two frames of the past and future.
FIG. 9 is a block diagram showing one constructional example of a general coder. In FIG. 9, a frame memory 901 stores an input image. The frame memory 901 can store a plurality of frames so that a motion vector can be calculated. An orthogonal transform section 902 converts each divisional image, obtained by dividing the image into image blocks, to data suitable for coding by applying a two-dimensional orthogonal transform to each image block. A quantizer 903 quantizes the converted data in accordance with a suitable quantizing step size. A variable length coding section 904 codes the quantized value at a variable length in accordance with a predetermined code table and outputs the coded value as a transmission line code.
A buffer 905 accumulates and smooths data from the variable length coding section 904 to output these data at a constant rate. An inverse quantizer 906 inversely quantizes an output of the quantizer 903. An inverse orthogonal transform section 907 performs an inverse orthogonal transform with respect to an output of the inverse quantizer 906. Each of frame memories 908 and 909 stores an image required for each of forward and backward predictions.
A motion compensation section 910 makes a motion compensation prediction in the forward direction, the backward direction or both directions by using a motion vector output from a motion vector detection/prediction mode judging section 911, described later, together with the selected prediction mode. The motion vector detection/prediction mode judging section 911 detects image movement from the image stored in the frame memory 901 and selects an optimum prediction mode for each image block from the three kinds of predictions: forward, backward and bidirectional. An intra/inter judging section 912 determines whether each image block is coded within a frame or between frames.
The image of the frame memory 901 is divided into image blocks, and the difference between each divisional image and the predicted image from the motion compensation section 910 is calculated. The calculated differential value is orthogonally transformed by the orthogonal transform section 902 and quantized by the quantizer 903. The quantized value is further coded by the variable length coding section 904 and output to the buffer 905. The output of the quantizer 903 is passed through the inverse quantizer 906 and the inverse orthogonal transform section 907, added to the predicted image from the motion compensation section 910, and stored in the frame memories 908 and 909. The images of the frame memories 908 and 909 are used as data for subsequent predicted images in accordance with the outputs of the motion vector detection/prediction mode judging section 911 and the intra/inter judging section 912.
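The coding and local-decoding loop described above can be sketched as follows. This is an illustrative Python sketch only: the 8x8 block size, the DCT as the two-dimensional orthogonal transform, the uniform quantizing step, and all function names are assumptions for illustration, not components taken from the coder of FIG. 9.

```python
import numpy as np

N = 8  # assumed block size

# 1-D DCT-II basis matrix of size N x N (an orthogonal transform)
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

def dct2(block):
    # two-dimensional orthogonal transform (role of section 902)
    return C @ block @ C.T

def idct2(coeff):
    # inverse orthogonal transform (role of section 907)
    return C.T @ coeff @ C

def code_block(block, predicted, step):
    # difference between the divisional image and the predicted image
    residual = block - predicted
    # orthogonal transform and quantization (roles of sections 902 and 903)
    quantized = np.round(dct2(residual) / step)
    # local decode: inverse quantization, inverse transform, and addition
    # of the predicted image (roles of sections 906 and 907)
    reconstructed = predicted + idct2(quantized * step)
    return quantized, reconstructed
```

In a coder of this kind, the reconstructed block would then be stored for use in subsequent predictions, as the frame memories 908 and 909 do in FIG. 9.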
FIG. 10 shows the motion vector detection/prediction mode judging section 911 in detail. A forward predicting motion vector/prediction error amount detecting section 001 calculates prediction error amounts between the past and present frames in the forward prediction, and finds the vector providing the minimum prediction error amount. A backward predicting motion vector/prediction error amount detecting section 002 calculates prediction error amounts between the future and present frames in the backward prediction, and finds the vector providing the minimum prediction error amount. A bidirectional prediction error amount detecting section 003 makes an interpolated image predicted in both directions by using the forward and backward directional motion vectors calculated by the detecting sections 001 and 002, and then calculates a prediction error amount with respect to the present frame. A comparator/mode selecting section 004 compares the error amounts of the forward, backward and bidirectional predictions with each other, finds the minimum error amount, and selects the corresponding prediction mode.
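The comparison performed by the comparator/mode selecting section 004 amounts to taking the minimum of the three error amounts. A minimal Python sketch, where the function and argument names are illustrative assumptions rather than parts of the coder:

```python
def select_prediction_mode(pe_forward, pe_backward, pe_bidirectional):
    """Return the prediction mode whose prediction error amount is
    smallest, as the comparator/mode selecting section 004 does."""
    errors = {
        "forward": pe_forward,
        "backward": pe_backward,
        "bidirectional": pe_bidirectional,
    }
    return min(errors, key=errors.get)
```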
Here, the motion vector is the motion amount at which the prediction error amount is minimized when the present and predicted frames are matched, for each image block, within a search region predetermined for that image block. For example, when the distance between the present and predicted frames is n, the search region can be ±16n in the horizontal and vertical directions. In the following description, B(i,j) denotes a present block and PB(i,j) denotes a predicted block, where i and j respectively denote the block positions in the horizontal and vertical directions, and (mx,my) denotes a motion vector. In this case, a prediction error PE is represented by the following formula (1):

PE = Σ(x=0 to xsize-1) Σ(y=0 to ysize-1) |B(i,j)(x,y) - PB(i,j)(x+mx, y+my)|   (1)
In this formula, xsize and ysize respectively show block sizes in the horizontal and vertical directions.
Instead of this formula, the following formula (2) may be used:

PE = Σ(x=0 to xsize-1) Σ(y=0 to ysize-1) (B(i,j)(x,y) - PB(i,j)(x+mx, y+my))²   (2)
Thus, moving image data are compressed by using a motion vector having a best compression efficiency.
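Under formula (1), motion vector detection is an exhaustive block matching over the search region. The following Python sketch illustrates this under stated assumptions: the function names are hypothetical, the error measure is the absolute-difference form of formula (1), and candidates falling outside the reference frame are simply skipped.

```python
import numpy as np

def prediction_error(cur_block, ref, top, left, mx, my):
    """Prediction error PE of formula (1): sum of absolute differences
    between the present block and the block displaced by the motion
    vector (mx, my) in the reference frame."""
    ysize, xsize = cur_block.shape
    cand = ref[top + my: top + my + ysize, left + mx: left + mx + xsize]
    return int(np.abs(cur_block.astype(int) - cand.astype(int)).sum())

def search_motion_vector(cur_block, ref, top, left, search=16):
    """Exhaustive search over a +/-search window; returns the motion
    vector (mx, my) minimizing PE, together with that minimum PE."""
    ysize, xsize = cur_block.shape
    best_mv, best_pe = None, float("inf")
    for my in range(-search, search + 1):
        for mx in range(-search, search + 1):
            # skip candidate blocks lying outside the reference frame
            if not (0 <= top + my and top + my + ysize <= ref.shape[0]
                    and 0 <= left + mx and left + mx + xsize <= ref.shape[1]):
                continue
            pe = prediction_error(cur_block, ref, top, left, mx, my)
            if pe < best_pe:
                best_mv, best_pe = (mx, my), pe
    return best_mv, best_pe
```

The same search structure works with formula (2) by squaring the differences instead of taking their absolute values.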
In the general motion vector detection/prediction mode judging section shown in FIG. 10, it is necessary to calculate the forward prediction error amount for the P-frame, and the forward, backward and bidirectional prediction error amounts for the B-frame. The forward prediction error amount of the P-frame and the forward and backward prediction error amounts of the B-frame are already calculated by the motion vector/prediction error amount detecting sections 001 and 002 when the optimum motion vector is calculated. However, the bidirectional prediction error amount must be calculated by the separately arranged bidirectional prediction error amount detecting section 003, so that additional hardware is required for this calculation.
Namely, the bidirectional prediction error amount detecting section 003 makes a forward directional predicted image block from the forward predicting vector and a backward directional predicted image block from the backward predicting vector. It then synthesizes a bidirectional predicted image block by averaging or interpolating the forward and backward directional predicted image blocks. Further, it calculates the prediction error amount between this bidirectional predicted image block and the original image block, and sets this calculated amount as the bidirectional prediction error amount. The bidirectional prediction error amount detecting section 003 makes this calculation by using the results of the motion vector/prediction error amount detecting sections 001 and 002. Therefore, there is a restriction in that the calculation of the bidirectional prediction error amount detecting section 003 cannot be made in parallel with the calculations of the motion vector/prediction error amount detecting sections 001 and 002.
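The synthesis and error calculation performed by the detecting section 003 might be sketched as follows; the function name and the rounding used in the averaging are illustrative assumptions, and the error measure is the absolute-difference form of formula (1).

```python
import numpy as np

def bidirectional_prediction_error(original, forward_pred, backward_pred):
    """Sketch of the detecting section 003: the bidirectional predicted
    image block is synthesized by averaging the forward and backward
    directional predicted image blocks, and the prediction error amount
    is the sum of absolute differences against the original block."""
    # average with rounding toward the nearest integer (an assumption)
    interpolated = (forward_pred.astype(int) + backward_pred.astype(int) + 1) // 2
    return int(np.abs(original.astype(int) - interpolated).sum())
```

Because this calculation consumes the motion vectors produced by the detecting sections 001 and 002, it can begin only after both directional searches finish, which illustrates the serialization restriction described above.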