Conventional motion compensated prediction methods for performing highly efficient encoding of a picture are described below.
The first example of a conventional motion compensated prediction method which will be discussed is a method using block matching that compensates for the translational motion of an object. For example, ISO/IEC 11172-2 (also known as the MPEG1 video standard) describes a forward/backward/interpolative motion compensated prediction method using block matching. The second example which will be discussed is a motion compensated prediction method using an affine motion model. For example, “Motion Compensated Prediction Using An Affine Motion Model” (technical report IE94-36 of the Institute of Electronics, Information and Communication Engineers of Japan) describes a motion compensated prediction method in which the displacement of an object in each arbitrarily shaped segment is modeled and expressed using affine motion parameters, and in which the affine motion parameters are detected so as to perform motion compensated prediction.
Now, the conventional motion compensation method using block matching by a translational motion and the conventional motion compensation method using the affine motion model will be described in more detail below.
FIG. 42 shows a known motion compensated prediction which utilizes block matching. In FIG. 42, i represents the position of a block on a display as a unit used for motion compensated prediction; fi(x, y, t) represents the value of the pel (x, y) in the block i at time t on the display; R represents a motion vector search range; and v represents a motion vector (∈ R). Block matching is a process for detecting, within the search range R of a reference picture 201, the block whose pel values are most approximate to the pel values fi(x, y, t) of the block i in an input picture 202, that is, for detecting the pel values fi+v(x, y, t−1) which minimize a prediction error power Dv expressed by one of the following equations (1).
Dv = Σ(x,y) {fi+v(x, y, t−1) − fi(x, y, t)}²  or  Dv = Σ(x,y) |fi+v(x, y, t−1) − fi(x, y, t)|    (1)
The value v which minimizes Dv will be the motion vector. In FIG. 42, a block matching search method using real sample point integer pels in a reference picture is referred to as an integer pel precision search, and a block matching search method using half-pels (interposed midway between the integer pels) in addition to integer pels is referred to as a half-pel precision search. Generally, under the same block matching search range, more search pel points can be obtained in the half-pel precision search than in the integer pel precision search. Consequently, increased prediction accuracy will be obtained with the half-pel precision search.
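The prediction error power Dv of equations (1) can be sketched in a few lines of code. The following is an illustrative Python sketch, not part of any standard; the names `prediction_error_power`, `ref`, and `cur` are assumptions. It computes either the squared-error or the absolute-error form of equations (1) for one candidate motion vector:

```python
def prediction_error_power(ref, cur, i_x, i_y, vx, vy, n=16, squared=True):
    """Compute Dv of equations (1) for one candidate motion vector (vx, vy).

    ref: reference picture at time t-1, cur: input picture at time t,
    both as 2-D lists of pel values; (i_x, i_y) is the top-left corner
    of block i; n is the block size in pels (16 for a macroblock).
    """
    d = 0
    for y in range(n):
        for x in range(n):
            diff = ref[i_y + vy + y][i_x + vx + x] - cur[i_y + y][i_x + x]
            d += diff * diff if squared else abs(diff)
    return d
```

The motion vector is the v which minimizes this quantity over the search range R; the absolute-error form of equations (1) avoids the multiplications of the squared form at some cost in accuracy.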
FIG. 43 is a block diagram showing a configuration of a motion compensated predictor (also referred to as a block matching section) using a motion compensated prediction method in accordance with, for example, the MPEG1 video standard.
In the figure, reference numeral 207 is a horizontal displacement counter, 208 is a vertical displacement counter, 211 is a memory readout-address generator, 213 is a pattern matching unit, and reference numeral 216 is a minimum prediction error power determinator. Reference numeral 203 is a horizontal displacement search range indication signal, 204 is a vertical displacement search range indication signal, 205 is input picture block data, 206 is an input picture block position indication signal, 209 is horizontal displacement search point data, 210 is vertical displacement search point data, 212 is a readout address, 214 is readout picture data, 215 is a prediction error power signal, 217 is a motion vector, 218 is a minimum prediction error power signal, and 219 is a frame memory for storing reference picture data.
FIG. 44 is a flow chart showing the operations of the conventional motion compensated predictor having the above-mentioned configuration of FIG. 43.
In FIG. 44, dx represents a horizontal displacement search pel point;
dy represents a vertical displacement search pel point;
range_h_min represents a lower limit in a horizontal displacement search range;
range_h_max represents an upper limit in the horizontal displacement search range;
range_v_min represents a lower limit in a vertical displacement search range;
range_v_max represents an upper limit in the vertical displacement search range;
D_min represents the minimum prediction error power;
(x, y) are coordinates representing the position of a pel in a macroblock;
D(dx, dy) represents prediction error power produced when dx and dy are searched;
f(x, y) is the value of a pel (x, y) in an input picture macroblock;
fr(x, y) is the value of a pel (x, y) in a reference picture;
D(x, y) is a prediction error for the pel (x, y) when dx and dy are searched;
MV_h is a horizontal component of a motion vector (indicating horizontal displacement); and
MV_v is a vertical component of a motion vector (indicating vertical displacement).
The block matching operation will be described in more detail, by referring to FIGS. 43 and 44.
1) Motion Vector Search Range Setting
Range_h_min and range_h_max are set through the horizontal displacement counter 207 according to the horizontal displacement search range indication signal 203. Range_v_min and range_v_max are set through the vertical displacement counter 208 according to the vertical displacement search range indication signal 204. In addition, the initial values of dx for the horizontal displacement counter 207 and dy for the vertical displacement counter 208 are set to range_h_min and range_v_min, respectively. In the minimum prediction error power determinator 216, the minimum prediction error power D_min is set to a maximum integer value MAXINT (for example, 0xFFFFFFFF). These operations correspond to step S201 in FIG. 44.
2) Possible Prediction Picture Readout Operation
Data on the pel (x+dx, y+dy) in a reference picture, which are distant from the pel (x, y) in the input picture macroblock by dx and dy are fetched from the frame memory. The memory readout address generator 211 illustrated in FIG. 43 receives the value of dx from the horizontal displacement counter 207 and the value of dy from the vertical displacement counter 208, and generates the address for the pel (x+dx, y+dy) in the frame memory.
3) Prediction Error Power Calculation
First, the prediction error power D(dx, dy) for the motion vector representing (dx, dy) is initialized to zero. This corresponds to step S202 in FIG. 44. The absolute value of the difference between the pel value read out in 2) and the value of the pel (x, y) in the input picture macroblock is accumulated into D(dx, dy). This operation is repeated for all pels in the 16×16 macroblock, that is, until x = y = 16. Then, the prediction error power D(dx, dy) produced when (dx, dy) is searched, namely Dv given by equations (1), is obtained. This operation is executed by the pattern matching unit 213 illustrated in FIG. 43. Then, the pattern matching unit 213 supplies D(dx, dy) to the minimum prediction error power determinator 216 through the prediction error power signal 215. These operations correspond to steps S203 through S209 in FIG. 44.
4) Minimum Prediction Error Power Updating
It is then determined whether the resultant D(dx, dy) obtained in 3) has given the minimum prediction error power among the searched results which have been obtained so far. This determination is made by the minimum prediction error power determinator 216 illustrated in FIG. 43. This corresponds to step S210 in FIG. 44. The minimum prediction error power determinator 216 compares the value of the minimum prediction error power D_min therein with D(dx, dy) supplied through the prediction error power signal 215. If D(dx, dy) is smaller than D_min, the minimum prediction error power determinator 216 updates the value of D_min to D(dx, dy). In addition, the minimum prediction error power determinator 216 retains the values of dx and dy at that time as the possible motion vector (MV_h, MV_v). This updating operation corresponds to step S211 in FIG. 44.
5) Motion Vector Value Determination
The above-mentioned 2) through 4) operations are repeated for all (dx, dy) within the motion vector search range R. (These operations correspond to steps S212 through S215 in FIG. 44.) The final values (MV_h, MV_v) retained by the minimum prediction error power determinator 216 are output as the motion vector 217.
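The flow of operations 1) through 5) can be summarized in software as an exhaustive integer-pel search. This is an illustrative Python sketch, not the predictor hardware of FIG. 43: the counters and the memory readout address generator are folded into plain loops, and the names `block_matching`, `ref`, and `cur` are assumptions:

```python
MAXINT = 0x7FFFFFFF  # plays the role of MAXINT in step S201

def block_matching(ref, cur, i_x, i_y,
                   range_h_min, range_h_max, range_v_min, range_v_max, n=16):
    """Exhaustive integer-pel block matching over the search range.

    Returns (MV_h, MV_v, D_min) for the n-by-n input macroblock whose
    top-left corner is at (i_x, i_y), using the absolute-error form.
    """
    d_min = MAXINT
    mv_h = mv_v = 0
    for dy in range(range_v_min, range_v_max + 1):       # vertical counter
        for dx in range(range_h_min, range_h_max + 1):   # horizontal counter
            d = 0                                        # step S202
            for y in range(n):                           # steps S203-S209
                for x in range(n):
                    d += abs(ref[i_y + dy + y][i_x + dx + x]
                             - cur[i_y + y][i_x + x])
            if d < d_min:                                # steps S210-S211
                d_min = d
                mv_h, mv_v = dx, dy
    return mv_h, mv_v, d_min
```

For example, if the input macroblock equals a reference picture region displaced by (2, 1) within the search range, the function recovers that displacement with zero residual error.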
FIG. 45 schematically shows a motion compensated prediction system in accordance with the MPEG1 video standard.
Under the MPEG1 video standard, a motion picture frame is typically referred to as a picture. One picture is divided into macroblocks, each of which includes 16×16 pels (color difference signal includes 8×8 pels). For each macroblock, motion compensated prediction using block matching is performed. The resultant motion vector value and a prediction error are then encoded.
Under the MPEG1 video standard, different motion compensation methods can be applied to different individual pictures. Referring to the figure, I-pictures are encoded without being subjected to motion compensated prediction and without reference to other pictures. P-pictures are encoded using forward motion compensated prediction from a past encoded picture. B-pictures may be encoded using forward motion compensated prediction from a past encoded picture, backward motion compensated prediction from a future picture to be encoded, and interpolative prediction from the mean value of the past encoded picture and the future picture to be encoded. However, the forward/backward/interpolative motion compensated predictions are basically all motion compensated prediction using block matching, differing only in the reference pictures used for implementing the prediction.
As described above, block matching has been established as the main method for implementing motion compensated prediction in current video encoding systems. Block matching, however, is an operation which determines the translational displacement of an object for each square block such as a macroblock, and it is based on the assumption that “a picture portion having the same luminance belongs to the same object”. Consequently, in principle, it is impossible to detect motions of an object other than square-block-based translational motions. For portions in which the object does not move according to a simple translational motion, such as rotation, scaling up and down, zooming, or three-dimensional motion, prediction accuracy will be reduced.
In order to solve the above-mentioned motion detecting problems associated with the conventional block matching method, motion compensated prediction using the affine motion model has been proposed, which aims at more accurately detecting the displacement of an object, including rotation and scaling of the object as well as translational motion. This known solution is based on the assumption that the position (x, y) of a pel in a picture segment to be predicted is converted to a position (x′, y′) in a reference picture using the affine motion model shown in the following equation (2). Under this assumption, the affine parameters are searched and detected as affine motion parameters. Motion compensated prediction performed on each arbitrarily shaped prediction picture segment after the detection of affine motion parameters is proposed and described in “Motion Compensated Prediction Using An Affine Motion Model” (technical report IE94-36 of the Institute of Electronics, Information and Communication Engineers of Japan).
( x′ )   ( cos θ   sin θ ) ( Cx   0 ) ( x )   ( tx )
(    ) = (               ) (        ) (   ) + (    )    (2)
( y′ )   ( −sin θ  cos θ ) ( 0   Cy ) ( y )   ( ty )
The definition of θ, (Cx, Cy), (tx, ty) will be described later.
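The mapping of equation (2) is straightforward to write out in code. The following illustrative Python sketch (the name `affine_map` is an assumption) applies the scaling, rotation, and translation of equation (2) to one pel position:

```python
import math

def affine_map(x, y, theta, c_x, c_y, t_x, t_y):
    """Map an input pel position (x, y) to a reference picture position
    (x', y') per equation (2): scale by (Cx, Cy), rotate by theta,
    then translate by (tx, ty)."""
    xs = math.cos(theta) * (c_x * x) + math.sin(theta) * (c_y * y) + t_x
    ys = -math.sin(theta) * (c_x * x) + math.cos(theta) * (c_y * y) + t_y
    return xs, ys
```

With θ = 0 and Cx = Cy = 1, the mapping reduces to the pure translation (x + tx, y + ty) used in block matching.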
FIG. 46 shows a concept of the motion compensated prediction process using the affine motion model.
In the figure,
i represents the position of a segment on a display used as a unit for motion compensated prediction;
fi(x, y, t) represents a pel (x, y) in the segment position i and at time t;
Rv represents a translational displacement search range;
Rrot,scale represents a search range for a rotated angle/scaled amount;
v represents a translational motion vector comprising the translational motion parameters (= (tx, ty));
rot is a rotation parameter (=a rotated angle θ); and
scale is scaled amount parameters (=(Cx, Cy)).
In the motion compensated prediction using the affine motion model, five affine motion parameters must be detected: the rotated angle θ and the scaled amount parameters (Cx, Cy), as well as the translational motion parameters (tx, ty) representing the motion vector. The optimum affine motion parameters could be calculated by searching through all combinations of parameters; however, the number of arithmetic operations required for such an exhaustive search is enormous. Thus, based on the assumption that the translational displacement is predominant, a two-stage affine motion parameter search algorithm is used. In the first stage, the translational displacement parameters (tx, ty) are searched. Then, in the second stage, the rotated angle θ and the scaled amount parameters (Cx, Cy) are searched around the area which the translational displacement parameters (tx, ty) determined in the first stage represent, and a minute adjustment of the translational displacement is implemented. Among the possible parameters, the combination of affine motion parameters which has produced the minimum prediction error power is determined to be the combination of parameters for the prediction picture segment. Then, the difference between the prediction picture segment and the current picture segment is calculated, regarded as a prediction error, and encoded. The prediction error power in accordance with motion compensated prediction using the affine motion model is given by the following equation (3):
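The two-stage search described above can be sketched in software. This is an illustrative Python sketch under simplifying assumptions: a single candidate motion vector is refined instead of four, reference pels are sampled at the nearest integer position in place of the interpolators, and the names `staged_affine_search`, `affine_error`, and `seg` are hypothetical:

```python
import math

def affine_error(ref, cur, seg, theta, cx, cy, tx, ty):
    """Sum of absolute prediction errors for one affine parameter set.
    seg is a list of (x, y) pel positions in the input segment; reference
    pels are taken at the nearest integer position for simplicity."""
    d = 0
    for x, y in seg:
        ax = cx * math.cos(theta) * x + cy * math.sin(theta) * y + tx
        ay = -cx * math.sin(theta) * x + cy * math.cos(theta) * y + ty
        d += abs(ref[round(ay)][round(ax)] - cur[y][x])
    return d

def staged_affine_search(ref, cur, seg, trans_range, thetas, scales, adjust):
    """Staged search: translational (tx, ty) first, then the rotated angle,
    then the scaled amounts, then minute adjustment of the translation."""
    best = lambda cands, err: min(cands, key=err)
    # stage 1: translational displacement only (theta = 0, Cx = Cy = 1)
    tx, ty = best([(dx, dy) for dx in trans_range for dy in trans_range],
                  lambda p: affine_error(ref, cur, seg, 0.0, 1.0, 1.0, *p))
    # stage 2a: rotated angle around the stage-1 result
    theta = best(thetas,
                 lambda t: affine_error(ref, cur, seg, t, 1.0, 1.0, tx, ty))
    # stage 2b: scaled amounts with the rotated angle fixed
    cx, cy = best([(sx, sy) for sx in scales for sy in scales],
                  lambda s: affine_error(ref, cur, seg, theta, s[0], s[1], tx, ty))
    # stage 2c: minute adjustment of the translational displacement
    tx, ty = best([(tx + ax, ty + ay) for ax in adjust for ay in adjust],
                  lambda p: affine_error(ref, cur, seg, theta, cx, cy, *p))
    return theta, cx, cy, tx, ty
```

Searching the parameters one group at a time, as here, replaces the enormous cost of a joint five-parameter search with a few small one-dimensional searches, at the risk of settling on a locally rather than globally optimal combination.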
Dv,rot,scale = Σ(x,y) {Mrot·Mscale·fi+v(x, y, t−1) − fi(x, y, t)}²  or  Σ(x,y) |Mrot·Mscale·fi+v(x, y, t−1) − fi(x, y, t)|    (3)

where

       ( cos θ   sin θ )            ( Cx   0 )
Mrot = (               )   Mscale = (        )
       ( −sin θ  cos θ )            ( 0   Cy )
FIG. 47 shows an example of a configuration of a conventional motion compensated predictor for performing motion compensated prediction using the affine motion model.
In the figure, reference numeral 220 is a translational displacement minute adjusted amount search range indication signal, reference numeral 221 is a rotated angle search range indication signal, 222 is a scaled amount search range indication signal, 223 is a translational displacement search range indication signal, 224 is a signal indicating a position of an input picture segment on a display, and 225 is input picture segment data. Furthermore, reference numeral 226 is a horizontal displacement counter, 227 is a vertical displacement counter, 228 is a translational displacement adder, 229 is a first minimum prediction error power determinator, 230 is a memory readout address generator, 231 is an interpolator, 232 is a half-pel interpolator, 233 is a rotated angle counter, 234 is a scaled amount counter, 235 is a translational displacement/rotated angle/scaled amount adder, 236 is a second minimum prediction error power determinator, 237 is a translational displacement minute adjusted amount counter, 238 is a translational displacement minute adjusted amount adder, and 239 is a final minimum prediction error power determinator.
FIG. 48 is a flow chart showing the conventional operations of the above-mentioned motion compensated predictor. FIG. 49 is a flow chart showing the details of the affine motion parameters detection step illustrated at S224 of FIG. 48.
In these flow charts,
MV_h[4] represents horizontal motion vector components (four possible components);
MV_v[4] represents vertical motion vector components (four possible components);
D_min represents the minimum prediction error power;
θ represents a rotated angle [radian];
Cx and Cy represent scaled amount parameters; and
tx and ty are motion vector minute adjusted amount parameters.
Furthermore, D(θ[i], Cx[i], Cy[i], tx[i], ty[i]) represent the minimum prediction error power obtained after the detection of the affine motion parameters when MV_h[i] and MV_v[i] have been selected;
dθ represents a rotated angle search pel point;
dCx represents a horizontal scaled amount search pel point;
dCy represents a vertical scaled amount search pel point;
dtx represents a horizontal displacement minute adjusted amount search pel point;
dty represents a vertical displacement minute adjusted amount search pel point;
range_radian_min represents a lower limit within a rotated angle search range;
range_radian_max represents an upper limit within the rotated angle search range;
range_scale_min represents a lower limit within a scaled amount search range;
range_scale_max represents an upper limit within the scaled amounts search range;
range_t_h_min represents a lower limit within a horizontal displacement minute adjusted amount search range;
range_t_h_max represents an upper limit within the horizontal displacement minute adjusted amount search range;
range_t_v_min represents a lower limit within a vertical displacement minute adjusted amount search range;
range_t_v_max represents an upper limit within the vertical displacement minute adjusted amount search range;
D_min represents the minimum prediction error power;
(x, y) represents a position of a pel in an input picture segment to be predicted;
f(x, y) represents the value of the pel (x, y) in the input picture to be predicted;
fr(x, y) represents the value of a pel (x, y) in a reference picture;
ax represents a value representing horizontal displacement obtained by using the affine motion model;
ay represents a value representing vertical displacement obtained by using the affine motion model;
D(ax, ay) represents a prediction error power produced when ax and ay are searched; and
D(x, y) is a prediction error for the pel (x, y) when ax and ay are searched.
Referring to FIG. 47 through FIG. 49, an operation of the conventional motion compensated prediction process using the affine motion model will be described in more detail.
In these figures, like elements and like steps are given like reference numerals and signs, and represent the same elements or the same processes.
1) First Stage
In the first stage of the conventional operation, detection of translational motion parameters (=the motion vector) obtained by the process similar to the above-mentioned block matching process is performed within a picture segment search range.
Referring to FIG. 47, the picture segment search range is set through the horizontal displacement counter 226 and the vertical displacement counter 227 by using the translational displacement search range indication signal 223. Then, the search pel points are moved. Through the translational displacement adder 228, the value indicating the position of a pel in an input picture segment is added to the counter values. The added result is supplied to the memory readout address generator 230, and the pel value in a possible prediction picture portion is read out from the frame memory 219. The readout pel value is supplied to the pattern matching unit 213, and an error calculation operation similar to that used in the block matching method is performed. The matched result is supplied to the first minimum prediction error power determinator 229 so as to obtain the four translational motion parameter pairs which have produced the smallest prediction errors. These four possible translational motion parameters are expressed as MV_h[4] (horizontal components) and MV_v[4] (vertical components). The operation of the first minimum prediction error power determinator 229 is similar to that of the minimum prediction error power determinator 216. These process steps correspond to steps S221 and S222 in FIG. 48.
2) Second Stage
2-1) Preparations (Picture Segment Search Range Setting and Initialization of the Minimum Prediction Error Power)
For each MV_h[i]/MV_v[i] (0≦i≦3), the rotated angle and the scaled amount are searched in the minute space around it. This operation corresponds to step S224 in FIG. 48; the detailed process steps thereof are illustrated in FIG. 49 and will be described in conjunction with the operation of the motion compensated predictor shown in FIG. 47.
First, through the rotated angle search range indication signal 221 and the scaled amount search range indication signal 222, the rotated angle search range and the scaled amount search range are set in the rotated angle counter 233 and the scaled amount counter 234, respectively. Through the translational displacement minute adjusted amount search range indication signal 220, the translational displacement search range is also set in the translational displacement minute adjusted amount counter 237. The second minimum prediction error power determinator 236 sets the value of the minimum prediction error power D_min retained therein to MAXINT. These operations correspond to step S229 in FIG. 49.
2-2) Rotated Angle Search
The same operation is repeated for each of MV_h[i]/MV_v[i] (0≦i≦3). Thus, the description of the rotated angle search will be directed to the case of MV_h[0]/MV_v[0] alone, and descriptions of the other cases will be omitted. The affine displacements ax and ay are obtained from the following equations (4) by changing the rotated angle dθ within the rotated angle search range while keeping the scaled amount parameters dCx and dCy and the translational displacement minute adjusted amounts dtx and dty unchanged:

ax = dCx*cos(dθ)*x + dCy*sin(dθ)*y + MV_h[i] + dtx
ay = −dCx*sin(dθ)*x + dCy*cos(dθ)*y + MV_v[i] + dty    (4)
The absolute value for a difference between the pel value fr(ax, ay) in a reference picture segment and the pel value f(x, y) in an input picture segment is determined and accumulated to D(ax, ay).
Referring to FIG. 47, the above-mentioned operation is executed by fixing the counted values of the scaled amount counter 234 and the translational displacement minute adjusted amount counter 237, determining ax and ay given by equations (4) through the translational displacement/rotated angle/scaled amount adder 235 based on the counted value of the rotated angle counter 233, reading out the pels necessary for calculating fr(ax, ay) from the frame memory 219 through the memory readout address generator 230, calculating fr(ax, ay) from these pels through the interpolator 231, and determining the absolute value for the difference between the pel value f(x, y) in the input picture segment and the pel value fr(ax, ay) in the reference picture segment through the pattern matching unit 213. Referring to FIG. 49, these operations correspond to steps S231 through S234.
The above-mentioned operations are performed over the entire rotated angle search range. Then, the rotated angle θ which has produced the minimum prediction error within the rotated angle search range is determined through the second minimum prediction error power determinator 236.
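The exhaustive search of steps S231 through S234 can be sketched as follows in Python. The segment layout, the nearest-pel lookup (standing in for the interpolator 231), and all names are illustrative assumptions, not taken from the source:

```python
import math

def rotated_angle_search(seg, ref, thetas, d_cx, d_cy, mv_h, mv_v, d_tx, d_ty):
    """For each candidate rotated angle, accumulate |f(x, y) - fr(ax, ay)|
    over the segment and keep the angle giving the minimum prediction
    error, as the second minimum prediction error power determinator 236
    does.  `seg` maps (x, y) -> f(x, y); `ref` is the reference picture
    indexed as ref[y][x]."""
    best_theta, d_min = None, float("inf")
    for theta in thetas:
        c, s = math.cos(theta), math.sin(theta)
        d = 0.0
        for (x, y), f_xy in seg.items():
            ax = d_cx * c * x + d_cy * s * y + mv_h + d_tx    # equation (4)
            ay = -d_cx * s * x + d_cy * c * y + mv_v + d_ty
            # nearest-pel lookup as a simplified stand-in for interpolation
            d += abs(f_xy - ref[round(ay)][round(ax)])
        if d < d_min:
            d_min, best_theta = d, theta
    return best_theta, d_min
```

The same loop structure applies to the scaled amount search and the translational displacement minute adjusted amount search, with a different parameter varied in each stage.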
2-3) Scaled Amount Search
The affine motion models ax and ay given by equation (4) are also obtained by fixing the counted value of the translational displacement minute adjusted amount counter 237, as in the rotated angle search, substituting the rotated angle θ determined in 2-2) into equation (4), and changing the scaled amount parameters Cx and Cy within the scaled amount search range.
The scaled amount parameters Cx and Cy which have minimized D(ax, ay) are obtained by performing the operations similar to those in the rotated angle search. The scaled amount counter 234 counts scaled amount search pel points.
2-4) Translational Displacement Minute Adjusted Amount Search
The affine motion models ax and ay given by equation (4) are also obtained by using the rotated angle θ determined in 2-2) and the scaled amount parameters Cx and Cy determined in 2-3), and changing the values of the translational displacement minute adjusted amounts tx and ty within the translational displacement minute adjusted amount search range.
Then, operations similar to those in the rotated angle search or the scaled amount search are performed. The translational displacement minute adjusted amount counter 237 counts translational displacement minute adjusted amount search pel points. In this case, tx and ty are searched with a half-pel precision. The half-pel values for tx and ty are calculated through the half-pel interpolator 232, if necessary, before the half-pel value data for tx and ty are supplied to the pattern matching unit 213. The half-pel values are calculated as shown in FIG. 50 and in the following equation (5), based on the spatial position relationship between half-pels and integer pels:
Î(x,y) = [I(xp,yp) + I(xp+1,yp) + I(xp,yp+1) + I(xp+1,yp+1)]/4  (x: odd, y: odd)
Î(x,y) = [I(xp,yp) + I(xp+1,yp)]/2  (x: odd, y: even)
Î(x,y) = [I(xp,yp) + I(xp,yp+1)]/2  (x: even, y: odd)  (5)
in which, both x and y are integers equal to or greater than zero. When x and y are both even numbers, half-pels having such coordinates of x and y will become integer pels.
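Equation (5) amounts to averaging the two or four integer pels that surround a half-pel position. A small Python sketch, under the assumption that (x, y) are coordinates on the doubled (half-pel) grid so that xp = x//2 and yp = y//2:

```python
def half_pel(pic, x, y):
    """Half-pel value per equation (5); `pic` holds integer pels,
    indexed pic[yp][xp].  (x, y) are half-pel-grid coordinates."""
    xp, yp = x // 2, y // 2
    if x % 2 == 1 and y % 2 == 1:   # x, y both odd: average of 4 pels
        return (pic[yp][xp] + pic[yp][xp + 1]
                + pic[yp + 1][xp] + pic[yp + 1][xp + 1]) / 4
    if x % 2 == 1:                  # x odd, y even: horizontal average
        return (pic[yp][xp] + pic[yp][xp + 1]) / 2
    if y % 2 == 1:                  # x even, y odd: vertical average
        return (pic[yp][xp] + pic[yp + 1][xp]) / 2
    return pic[yp][xp]              # x, y both even: an integer pel
```

The branch for x and y both even returns the integer pel itself, matching the remark that such coordinates coincide with integer pels.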
This completes the process flow of the operations illustrated in FIG. 49.
2-5) Final Affine Motion Parameters Determination
A prediction error between a prediction picture segment and an input picture segment is then determined. This prediction error can be obtained by using θ[i], Cx[i], Cy[i], tx[i], and ty[i] given by the above-mentioned affine motion parameters search of 2-2) through 2-4) for all of MV_h[i] and MV_v[i]. The picture segment position i and the set of affine motion parameters therefor which have given the smallest prediction error are regarded as the final search result. These operations correspond to steps S225 through S228 in FIG. 48.
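The final selection of steps S225 through S228 reduces to taking a minimum over the candidate picture segment positions. A minimal sketch, assuming each position i has already been assigned its best parameter set and prediction error (the data structure is hypothetical):

```python
def select_final(results):
    """`results` maps a picture segment position i to a pair
    (prediction_error, (theta, cx, cy, tx, ty)).  Return the position
    and affine parameter set with the smallest prediction error."""
    best_i = min(results, key=lambda i: results[i][0])
    return best_i, results[best_i][1]
```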
As described above, the affine motion parameters search requires an enormous computational burden as well as a great many process steps.
FIG. 51 is a diagram showing a method of calculating a non-integer pel value produced when the rotated angle and the scaled amount are searched. In other words, the figure is a diagram showing a method of calculating fr(ax, ay) through the interpolator 231.
In the figure, ∘ represents a real sample point in a picture, while • represents a virtual pel value obtained by the above-mentioned calculation method. fr(ax, ay) is represented by Î(x, y), calculated in a reference picture and given by the following equation (6) (in which x = ax, y = ay):
Î(x,y) = wx1*wy1*I(xp,yp) + wx2*wy1*I(xp+1,yp) + wx1*wy2*I(xp,yp+1) + wx2*wy2*I(xp+1,yp+1)
wx2 = x′ − xp, wx1 = 1.0 − wx2, wy2 = y′ − yp, wy1 = 1.0 − wy2  (6)
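Equation (6) is an ordinary bilinear interpolation from the four integer pels surrounding a non-integer position. A Python sketch, with the picture indexed as ref[y][x] and interior coordinates assumed so that all four neighbours exist:

```python
import math

def interpolate(ref, x, y):
    """fr(x, y) by bilinear interpolation per equation (6): the weights
    wx1/wx2 and wy1/wy2 come from the fractional parts of x and y
    relative to the integer pel (xp, yp)."""
    xp, yp = math.floor(x), math.floor(y)
    wx2, wy2 = x - xp, y - yp
    wx1, wy1 = 1.0 - wx2, 1.0 - wy2
    return (wx1 * wy1 * ref[yp][xp] + wx2 * wy1 * ref[yp][xp + 1]
            + wx1 * wy2 * ref[yp + 1][xp] + wx2 * wy2 * ref[yp + 1][xp + 1])
```

Because this weighted sum must be re-evaluated at every pel of every candidate segment, it dominates the cost of the affine motion parameters search.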
During the affine motion parameters search, pel matching is performed and the segment which has produced the minimum prediction error power is selected. Consequently, each time any of the above-mentioned five affine motion parameters is changed, the candidate prediction picture segment must be formed again. In addition, rotation and scaling of an object produce non-integer pel values, so the operations expressed in equation (6) are repeated over and over again during the affine motion parameters search. As a result, the affine motion parameters search is very tedious and time-consuming.
As another motion compensation method using block matching, applicable to simple enlargement or reduction of a picture, Japanese Unexamined Patent Publication No. HEI6-153185 discloses a motion compensator and an encoder utilizing such a motion compensation method. In this method, a reference picture portion included in a frame memory is reduced or enlarged by a thin-out circuit or an interpolator, and a motion vector indicating the reduction or enlargement is then detected. In this configuration, a fixed block is extracted from the reference picture portion to perform an interpolation or a thin-out operation, instead of a complex arithmetic operation such as that required by a motion compensation method using the affine motion model. Namely, after a predetermined process is implemented on the extracted fixed picture portion, the extracted picture portion is compared with an input picture. Because the process is simple and fixed, this method can be applied only to motion prediction involving simple reduction or enlargement of an input picture.
The conventional motion compensated prediction methods are constituted and implemented as described above.
In the first conventional motion compensated prediction method using block matching, formation of a prediction picture portion is implemented by translational motion of a macroblock from a reference picture. Thus, the process itself is simple. However, only the translational displacement of an object can be predicted, and prediction performance deteriorates when rotation, scaling up and down, or zooming of the object is involved in the motion.
On the other hand, in the second conventional motion compensated prediction method using the affine motion model, a prediction picture segment is formed using the affine motion model. Thus, this method can be applied even when the motion of an object involves more complicated types of motion, such as rotation. However, the operations needed for implementing this method are very complex, and such a motion compensated predictor must be provided with a complex circuit having a large number of units.
In general, as the motion compensated prediction process becomes more simplified, prediction often becomes less accurate. In contrast, the motion compensated prediction using the affine motion model increases prediction accuracy at the expense of more complex and tedious operations.
As for the decoder, no concrete method for performing such a complex process with a conventional configuration has been proposed.