With the development of multimedia applications, it has become common in recent years to handle information of all sorts of media, such as audio, video and text, in an integrated manner. Digitizing all of these media makes it possible to handle them uniformly. However, since digitized images carry an enormous amount of data, information compression techniques are indispensable for their storage and transmission. On the other hand, in order to ensure interoperability of compressed image data, standardization of compression techniques is also important. Standards on image compression techniques include H.261 and H.263, recommended by the ITU-T (International Telecommunication Union Telecommunication Standardization Sector), and MPEG (Moving Picture Experts Group)-1, MPEG-2 and MPEG-4 of the ISO (International Organization for Standardization).
Inter picture prediction with motion compensation is a technique common to these moving picture coding methods. For motion compensation in these methods, each of the pictures constituting an input moving picture is divided into rectangles (blocks) of a predetermined size, and a predictive image to be referred to for coding and decoding is generated based on a motion vector indicating the motion of each block between pictures.
A motion vector is estimated for each block, or for each area into which a block is divided. A previously coded picture located forward or backward, in display order, of the current picture to be coded serves as a reference picture (hereinafter referred to as a forward reference picture or a backward reference picture). In motion estimation, the block (area) in the reference picture that predicts the current block most appropriately is selected from among the blocks in the reference picture, and the location of the selected block relative to the current block is taken as the best motion vector. At the same time, a prediction mode is determined, that is, the information specifying the prediction method that makes the most appropriate prediction using the pictures available for reference.
One such prediction mode is direct mode, in which inter picture prediction coding is performed with reference to pictures that are temporally forward and backward in display order (see, for example, ISO/IEC MPEG and ITU-T VCEG Working Draft Number 2, Revision 2, 2002-03-15, p. 64, 7.4.2 Motion vectors in direct mode). In direct mode, a motion vector is not coded explicitly as data to be coded, but is derived from a previously coded motion vector. More specifically, a motion vector of a current block in a current picture to be coded is calculated with reference to the motion vector of a block (reference block) located, in a previously coded picture in the neighborhood of the current picture, at the same coordinates (spatial position) as the current block. A predictive image (motion compensation data) is then generated based on this calculated motion vector. Note that when decoding, a motion vector is derived in direct mode from a previously decoded motion vector in the same manner.
Calculation of a motion vector in direct mode will be explained below more specifically. FIG. 1 is an illustration of motion vectors in direct mode. In FIG. 1, a picture 1200, a picture 1201, a picture 1202 and a picture 1203 are located in display order. The picture 1202 is a current picture to be coded, and a block MB1 is a current block to be coded. FIG. 1 shows the case where multiple inter picture prediction is performed for the block MB1 in the picture 1202 using the pictures 1200 and 1203 as reference pictures. In order to simplify the following explanation, it is assumed that the picture 1203 is located backward of the picture 1202 and the picture 1200 is located forward of the picture 1202, but these pictures 1200 and 1203 do not always need to be located in this order.
The picture 1203, which is a backward reference picture for the picture 1202, has a motion vector that refers to the forward picture 1200. Motion vectors of the current block MB1 are therefore determined using the motion vector MV1 of the reference block MB2 in the picture 1203, located backward of the current picture 1202. Two motion vectors MVf and MVb are calculated by

    MVf = MV1 × TRf / TR1   Equation 1(a)
    MVb = MV1 × TRb / TR1   Equation 1(b)

where MVf is the forward motion vector of the current block MB1, MVb is the backward motion vector of the current block MB1, TR1 is the difference in time information between the picture 1200 and the picture 1203 (the difference in time information between the picture having the motion vector MV1 and the reference picture pointed to by MV1), TRf is the difference in time information between the picture 1200 and the picture 1202 (the difference in time information between the picture having the motion vector MVf and the reference picture pointed to by MVf), and TRb is the difference in time information between the picture 1202 and the picture 1203 (the difference in time information between the picture having the motion vector MVb and the reference picture pointed to by MVb). Note that TR1, TRf and TRb are not limited to differences in time information between pictures; they may be any index data (data included in a stream explicitly or implicitly, or data associated with a stream) indicating a temporal distance between pictures in display order that can be used for scaling motion vectors, such as data obtained from a difference in picture numbers assigned to the respective pictures, data obtained from a difference in picture display order (or from information indicating picture display order), or data obtained from the number of pictures between the pictures.
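As a minimal sketch of Equations 1(a) and 1(b), the scaling can be written as follows. The function name, the tuple representation of a motion vector, and the use of exact floating-point arithmetic are illustrative assumptions, not taken from the draft standard:

```python
def scale_direct_mode_mv(mv1, tr1, trf, trb):
    """Derive the forward and backward motion vectors of the current
    block from the reference block's motion vector MV1.

    mv1: (x, y) motion vector of the co-located reference block MB2
    tr1: temporal distance spanned by MV1 (picture 1200 to 1203)
    trf: temporal distance spanned by MVf (picture 1200 to 1202)
    trb: temporal distance spanned by MVb (picture 1202 to 1203)
    """
    mvf = (mv1[0] * trf / tr1, mv1[1] * trf / tr1)  # Equation 1(a)
    mvb = (mv1[0] * trb / tr1, mv1[1] * trb / tr1)  # Equation 1(b)
    return mvf, mvb

# Example with the FIG. 1 layout, assuming one unit of time between
# consecutive pictures: TR1 = 3, TRf = 2, TRb = 1
mvf, mvb = scale_direct_mode_mv((6, -3), 3, 2, 1)
```

Here MVf = (4, -2) and MVb = (2, -1): both derived vectors are simply MV1 shrunk in proportion to the temporal distances they span.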
Next, the flow of processing for deriving motion vectors will be explained. FIG. 2 is a flowchart showing this flow. First, information on the motion vector of the reference block MB2 is obtained (Step S1301). In the example shown in FIG. 1, information on the motion vector MV1 is obtained. Next, parameters for deriving the motion vectors of the current block MB1 are obtained (Step S1302). These parameters are scaling coefficient data used for scaling the motion vector obtained in Step S1301; more specifically, they correspond to TR1, TRf and TRb in Equations 1(a) and 1(b). Finally, the motion vector obtained in Step S1301 is scaled by the multiplication and division of Equations 1(a) and 1(b) using these parameters, so as to derive the motion vectors MVf and MVb of the current block MB1 (Step S1303).
As shown in Equations 1(a) and 1(b) above, division is required for deriving motion vectors. However, as a first problem, division takes more time to compute than operations such as addition and multiplication. This is undesirable for a device such as a mobile phone that requires low power consumption, because such a device uses a processor with lower computational capability in order to meet that requirement.
Under these circumstances, one conceivable approach for avoiding division is to derive motion vectors by multiplication, with reference to multiplier parameters corresponding to the divisors. This replaces division with multiplication, which requires a smaller amount of calculation, and thus simplifies the scaling process.
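One way such a multiplier table could look is sketched below, assuming a fixed-point reciprocal per possible divisor. The shift width, table size, and rounding are illustrative assumptions (no particular standard's values), chosen only to show how the division by TR1 becomes a multiplication and a bit shift:

```python
SHIFT = 16    # fixed-point precision (assumed value)
MAX_TR = 64   # assumed maximum temporal distance between pictures

# Multiplier parameters corresponding to each possible divisor TR1:
# MULTIPLIERS[d] approximates 2**SHIFT / d, so that
# x / d  is approximated by  (x * MULTIPLIERS[d]) >> SHIFT.
# Index 0 is a placeholder; a temporal distance of 0 never divides.
MULTIPLIERS = [0] + [round((1 << SHIFT) / d) for d in range(1, MAX_TR + 1)]

def scale_without_division(mv1_component, tr, tr1):
    """Compute mv1_component * tr / tr1 as in Equations 1(a)/1(b),
    using only multiplication and a shift.  Adding half of 2**SHIFT
    before shifting rounds to the nearest integer."""
    product = mv1_component * tr * MULTIPLIERS[tr1]
    return (product + (1 << (SHIFT - 1))) >> SHIFT

mvf_x = scale_without_division(6, 2, 3)   # 6 * 2 / 3
```

The table has one entry per possible value of TR1; the second problem below is precisely that, when TR1 can range widely, this table grows large.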
However, as a second problem, the parameters for deriving motion vectors take various values depending on the distances between the reference pictures and the picture including the current block, so the parameters can have a wide range of values. An enormous number of multiplier parameters, corresponding to all of the possible divisors, would have to be prepared, and thus a large memory capacity would be required.
In order to solve the first and second problems described above, the object of the present invention is to provide a motion vector derivation method, a moving picture coding method and a moving picture decoding method that derive motion vectors with a smaller amount of calculation.