“Motion-compensated interframe prediction” is one of the important elements of conventional moving image compression encoding/decoding technology. Motion-compensated interframe prediction is a method for detecting how the pictures in the continuous frames (screens) composing a moving image move, so that the moving image can be compressed effectively. In typical moving image compression methods such as MPEG (Moving Picture Experts Group), the following “motion vector” is used when motion-compensated interframe prediction is performed. The motion vector indicates, for each unit called a “macroblock” obtained by dividing a picture, in which direction and by what amount a macroblock of the current picture has moved relative to a macroblock in another picture preceding or succeeding the current picture in display order. Note that the picture compared against is called a “reference picture”.
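The idea of finding a motion vector per macroblock can be illustrated with a minimal block-matching sketch. This is only an illustration, not the method of any particular standard: the function names, the tiny 2×2 "macroblock" size, and the SAD (sum of absolute differences) cost with a full search over a ±1-pixel window are all assumptions chosen for brevity.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def block_at(picture, y, x, size):
    """Extract a size-by-size block whose top-left corner is (y, x)."""
    return [row[x:x + size] for row in picture[y:y + size]]

def find_motion_vector(current, reference, y, x, size, search=1):
    """Return the (dy, dx) minimizing SAD within a +/-search pixel window.

    (dy, dx) is the displacement from the current macroblock's position
    to the best-matching block in the reference picture.
    """
    cur = block_at(current, y, x, size)
    best, best_cost = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            # Stay inside the reference picture.
            if (0 <= ry <= len(reference) - size and
                    0 <= rx <= len(reference[0]) - size):
                cost = sad(cur, block_at(reference, ry, rx, size))
                if best_cost is None or cost < best_cost:
                    best_cost, best = cost, (dy, dx)
    return best
```

An encoder would then transmit only the vector and the prediction residual rather than the raw macroblock samples.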
In recent years, MPEG-4 AVC (Advanced Video Coding) has been standardized as a new moving image compression encoding/decoding technology. In MPEG-4 AVC, a “bidirectional motion-compensated interframe prediction” method, an improved form of motion-compensated interframe prediction, is used (see Nonpatent Document 1).
Here, the bidirectional motion-compensated interframe prediction will be briefly described.
Bidirectional motion-compensated interframe prediction is a method that selects two arbitrary reference pictures from the pictures preceding or succeeding a current picture in display order, and performs motion-compensated interframe prediction using those two reference pictures. Note that a picture on which bidirectional motion-compensated interframe prediction is performed is called a “Bi-predictive picture”, or “B picture” for short.
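Once a block has been predicted from each of the two reference pictures, the two predictions are typically combined sample by sample. The sketch below is an assumed simple average with integer rounding, as commonly used in integer-arithmetic codecs; it is not the exact weighting of any particular standard.

```python
def bipredict(block0, block1):
    """Combine two motion-compensated prediction blocks sample by sample.

    Uses the rounding average (a + b + 1) >> 1, a common choice in
    integer-arithmetic video codecs (assumed here for illustration).
    """
    return [[(a + b + 1) >> 1 for a, b in zip(r0, r1)]
            for r0, r1 in zip(block0, block1)]
```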
When a B picture is encoded, an encoding mode called the “direct mode” may be used. In the direct mode, the motion vector of a current macroblock is not encoded; instead, it is calculated from the motion vector of a reference macroblock that has already been decoded. Specifically, if the current macroblock is encoded in the direct mode, its motion vector is calculated using the motion vector of the reference macroblock located at the same orthogonal coordinates as the current macroblock in a preceding picture. A conventional process of decoding a macroblock encoded in the direct mode will be briefly described with reference to FIG. 7.
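As one concrete form of this calculation, H.264/AVC's temporal direct mode scales the co-located macroblock's motion vector by the ratio of picture distances. The sketch below shows that derivation in simplified floating-point form; the standard itself uses fixed-point arithmetic with clipping, and the variable names here are illustrative.

```python
def temporal_direct(mv_col, tb, td):
    """Derive the two direct-mode vectors from the co-located vector.

    mv_col: (x, y) motion vector of the co-located macroblock.
    tb: display-order distance from the current picture to the list-0
        reference picture.
    td: display-order distance between the two reference pictures.

    Simplified version of the H.264 temporal direct derivation:
      mvL0 = mv_col * tb / td
      mvL1 = mvL0 - mv_col
    """
    mv_l0 = tuple(round(c * tb / td) for c in mv_col)
    mv_l1 = tuple(f - c for f, c in zip(mv_l0, mv_col))
    return mv_l0, mv_l1
```

Because the vectors are derived rather than transmitted, direct mode saves the bits that the current macroblock's motion vector would otherwise cost.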
In MPEG, so that macroblocks encoded in the direct mode can be decoded later, a predetermined amount of motion vectors of already-decoded macroblocks is stored while the macroblocks in a picture are decoded sequentially. For example, according to one specification of MPEG-4 AVC, the motion vectors of four pictures are stored. In this case, if one picture is composed of 8,160 macroblocks, the motion vectors of 32,640 macroblocks in total must be stored. An ordinary buffer inside a decoding circuit is neither large enough nor practical for storing so many motion vectors. Therefore, the motion vectors of decoded macroblocks are stored in a memory located outside the decoding circuit, such as a DRAM (Dynamic Random Access Memory).
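The storage requirement in the paragraph above can be worked through directly. Only the macroblock and picture counts come from the text; the bytes-per-vector figure is an assumption added purely to show the order of magnitude.

```python
MACROBLOCKS_PER_PICTURE = 8_160  # e.g. 1920 x 1088 / (16 x 16), per the text
STORED_PICTURES = 4              # per the MPEG-4 AVC specification cited

# Total macroblocks whose motion vectors must be retained.
total_mbs = MACROBLOCKS_PER_PICTURE * STORED_PICTURES  # 32,640

# Illustrative size estimate; 4 bytes per vector is an assumption,
# not a figure from the text.
BYTES_PER_VECTOR = 4
total_bytes = total_mbs * BYTES_PER_VECTOR
```

Even at this conservative estimate the buffer runs to roughly 128 KiB, which motivates moving the storage out to external DRAM.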
As shown in FIG. 7, when a macroblock of a B picture is decoded, the macroblock type information included in the head of the macroblock is first referenced to judge whether the macroblock is encoded in the direct mode (step S200).
If the macroblock is encoded in the direct mode, a DMA (Direct Memory Access) transfer is performed to obtain the motion vector of the reference macroblock from the external memory (step S201). The motion vector of the current macroblock is then specified based on the motion vector of the reference macroblock transferred from the external memory (step S202).
Nonpatent Document 1: ITU-T Recommendation H.264, “Advanced Video Coding for Generic Audiovisual Services”
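The control flow of steps S200 to S202 can be sketched as follows. This is only a structural illustration of the flow described above: the dictionary stands in for the external DRAM, `dma_fetch` is a hypothetical helper representing the DMA transfer, and the derivation in step S202 is reduced to an identity copy rather than the real calculation.

```python
DIRECT_MODE = "direct"

# Stand-in for the external memory (DRAM) holding decoded motion vectors,
# keyed by (picture index, macroblock index). Illustrative only.
external_memory = {}

def dma_fetch(picture, mb_index):
    """Hypothetical stand-in for the DMA transfer of step S201."""
    return external_memory[(picture, mb_index)]

def decode_motion_vector(mb_type, coded_mv, ref_picture, mb_index):
    """Decode one macroblock's motion vector, following FIG. 7's flow."""
    # Step S200: judge from the macroblock type information whether the
    # macroblock is encoded in the direct mode.
    if mb_type == DIRECT_MODE:
        # Step S201: obtain the reference macroblock's motion vector
        # from the external memory.
        ref_mv = dma_fetch(ref_picture, mb_index)
        # Step S202: specify the current macroblock's motion vector from
        # the reference vector (identity here; the real derivation scales
        # by picture distance).
        return ref_mv
    # Otherwise the motion vector was encoded in the bitstream.
    return coded_mv
```

Note that every direct-mode macroblock triggers a fetch from external memory, which is exactly the access pattern whose cost the stored-vector scheme must accommodate.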