The present invention relates to video coding. In particular, it relates to compression of video information using motion compensated prediction.
A video sequence consists of a large number of video frames, each of which is formed of a large number of pixels, each pixel being represented by a set of digital bits. Because of the large number of pixels in a video frame and the large number of frames in even a typical video sequence, the amount of data required to represent the sequence quickly becomes large. For instance, a video frame may include an array of 640 by 480 pixels, each pixel having an RGB (red, green, blue) color representation of eight bits per color component, totaling 7,372,800 bits per frame. Video sequences comprise a sequence of still images, which are recorded/displayed at a rate of typically 15-30 frames per second. The amount of data needed to transmit information about each pixel of each frame separately would thus be enormous.
Video coding tackles the problem of reducing the amount of information that needs to be transmitted in order to present the video sequence with an acceptable image quality. For example, in videotelephony the encoded video information is transmitted using conventional telephone networks, where transmission bit rates are typically multiples of 64 kilobits/s. In mobile videotelephony, where transmission takes place at least in part over a radio communications link, the available transmission bit rates can be as low as 20 kilobits/s.
In typical video sequences the change of the content of successive frames is to a great extent the result of motion in the scene. This motion may be due to camera motion or due to motion of the objects present in the scene. Therefore typical video sequences are characterized by significant temporal correlation, which is highest along the trajectory of the motion. Efficient compression of video sequences usually takes advantage of this property. Motion compensated prediction is a widely recognized technique for compression of video. It utilizes the fact that in a typical video sequence, image intensity/chrominance values in a particular frame segment can be predicted using image intensity/chrominance values of some other already coded and transmitted frame, given the motion trajectory between these two frames. Occasionally it is advisable to transmit a whole frame, to prevent the deterioration of image quality due to accumulation of errors and to provide additional functionalities, for example, random access to the video sequence.
A schematic diagram of an example video coding system using motion compensated prediction is shown in FIGS. 1 and 2 of the accompanying drawings. FIG. 1 illustrates an encoder 10 employing motion compensation and FIG. 2 illustrates a corresponding decoder 20. The operating principle of video coders using motion compensation is to minimize the prediction error frame En(x,y), which is the difference between the current frame In(x,y) being coded and a prediction frame Pn(x,y). The prediction error frame is thus
En(x,y) = In(x,y) − Pn(x,y).   (1)
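As a minimal sketch, equation (1) is an element-wise difference of the two frames; signed arithmetic is assumed so that negative errors are preserved (the tiny frames below are illustrative, not values from the description):

```python
import numpy as np

def prediction_error(current, predicted):
    """Prediction error frame E_n = I_n - P_n (equation 1).

    Frames are 2-D arrays of intensity values; casting to a signed
    type keeps negative differences from wrapping around.
    """
    return current.astype(np.int16) - predicted.astype(np.int16)

current = np.array([[10, 12], [8, 9]], dtype=np.uint8)
predicted = np.array([[9, 12], [10, 7]], dtype=np.uint8)
error = prediction_error(current, predicted)
```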
The prediction frame is built using pixel values of a reference frame Rn(x,y), which is one of the previously coded and transmitted frames (for example, a frame preceding the current frame), and the motion of pixels between the current frame and the reference frame. The motion of the pixels may be presented as the values of horizontal and vertical displacements Δx(x,y) and Δy(x,y) of a pixel at location (x,y) in the current frame In(x,y). The pair of numbers [Δx(x,y), Δy(x,y)] is called the motion vector of this pixel. The motion vectors are typically represented using some known functions (called basis functions) and coefficients (this is discussed in more detail below), and an approximate motion vector field (Δ̃x(x,y), Δ̃y(x,y)) can be constructed using the coefficients and the basis functions.
The prediction frame is given by
Pn(x,y) = Rn[x + Δ̃x(x,y), y + Δ̃y(x,y)],   (2)
where the reference frame Rn(x,y) is available in the Frame Memory 17 of the encoder 10 and in the Frame Memory 24 of the decoder 20 at a given instant. An information stream (2) carrying information about the motion vectors is combined with information about the prediction error (information stream 1) in the multiplexer 16, and an information stream (3), typically containing at least those two types of information, is sent to the decoder 20.
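Equation (2) can be sketched as follows for integer-valued displacements; sub-pixel motion would additionally require interpolation, and the clipping at frame borders is an assumption of this sketch rather than behavior prescribed by the description:

```python
import numpy as np

def predict_frame(reference, dx, dy):
    """Prediction frame P_n(x,y) = R_n[x + dx(x,y), y + dy(x,y)]
    (equation 2). dx and dy are per-pixel integer displacement arrays;
    source coordinates outside the reference frame are clipped."""
    h, w = reference.shape
    ys, xs = np.mgrid[0:h, 0:w]          # pixel grid: y = row, x = column
    src_x = np.clip(xs + dx, 0, w - 1)
    src_y = np.clip(ys + dy, 0, h - 1)
    return reference[src_y, src_x]

ref = np.arange(16).reshape(4, 4)
dx = np.ones((4, 4), dtype=int)          # uniform translation: one pixel right
dy = np.zeros((4, 4), dtype=int)
pred = predict_frame(ref, dx, dy)
```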
In the Prediction Error Coding block 14, the prediction error frame En(x,y) is typically compressed by representing it as a finite series (transform) of some 2-dimensional functions. For example, a 2-dimensional Discrete Cosine Transform (DCT) can be used. The transform coefficients related to each function are quantized and entropy coded before they are transmitted to the decoder (information stream 1 in FIG. 1). Because of the error introduced by quantization, this operation usually produces some degradation in the prediction error frame En(x,y).
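A minimal sketch of this transform-and-quantize step, using a naive orthonormal 2-D DCT; the block size and the quantization step are illustrative assumptions:

```python
import numpy as np

def dct2(block):
    """Naive 2-D DCT-II of a square block, as used in the Prediction
    Error Coding block. Quantizing the resulting coefficients is what
    introduces the coding error mentioned in the text."""
    n = block.shape[0]
    k = np.arange(n)
    # 1-D DCT-II basis matrix with orthonormal scaling
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c @ block @ c.T

def quantize(coeffs, step):
    """Uniform quantization of the transform coefficients."""
    return np.round(coeffs / step).astype(int)

block = np.full((4, 4), 5.0)     # flat block: only the DC coefficient survives
coeffs = dct2(block)
q = quantize(coeffs, step=2.0)
```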
In the Frame Memory 24 of the decoder 20 there is a previously reconstructed reference frame Rn(x,y). Using the decoded motion information (Δ̃x(x,y), Δ̃y(x,y)) and Rn(x,y) it is possible to reconstruct the prediction frame Pn(x,y) in the Motion Compensated Prediction block 21 of the decoder 20. The transmitted transform coefficients of the prediction error frame En(x,y) are used in the Prediction Error Decoding block 22 to construct the decoded prediction error frame Ẽn(x,y). The pixels of the decoded current frame Ĩn(x,y) are reconstructed by adding the prediction frame Pn(x,y) and the decoded prediction error frame Ẽn(x,y):
Ĩn(x,y) = Pn(x,y) + Ẽn(x,y) = Rn[x + Δ̃x(x,y), y + Δ̃y(x,y)] + Ẽn(x,y).   (3)
This decoded current frame may be stored in the Frame Memory 24 as the next reference frame Rn+1(x,y).
Let us next discuss in more detail the motion compensation and transmission of motion information. The construction of the prediction frame Pn(x,y) in the Motion Compensated Prediction block 13 of the encoder 10 requires information about the motion in the current frame In(x,y). Motion vectors [Δx(x,y), Δy(x,y)] are calculated in the Motion Field Estimation block 11 in the encoder 10. The set of motion vectors of all pixels of the current frame [Δx(·), Δy(·)] is called the motion vector field. Due to the very large number of pixels in a frame it is not efficient to transmit a separate motion vector for each pixel to the decoder. Instead, in most video coding schemes the current frame is divided into larger image segments and information about the segments is transmitted to the decoder.
The motion vector field is coded in the Motion Field Coding block 12 of the encoder 10. Motion Field Coding refers to representing the motion in a frame using some predetermined functions or, in other words, representing it with a model. Almost all of the motion vector field models commonly used are additive motion models. Motion compensated video coding schemes may define the motion vectors of image segments by the following general formulae:

Δx(x,y) = Σi=0…N−1 ai fi(x,y)   (4)

Δy(x,y) = Σi=0…M−1 bi gi(x,y)   (5)
where coefficients ai and bi are called motion coefficients. They are transmitted to the decoder. Functions fi and gi are called motion field basis functions, and they are known both to the encoder and decoder.
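Formulae (4) and (5) can be evaluated directly once the basis functions and coefficients are fixed. In the sketch below, the affine basis (1, x, y) and the coefficient values are illustrative assumptions chosen for the example:

```python
def motion_vector(coeffs_a, coeffs_b, basis_f, basis_g, x, y):
    """Motion vector of equations (4) and (5):
    dx = sum_i a_i * f_i(x, y),  dy = sum_i b_i * g_i(x, y).
    The basis functions are known to both encoder and decoder;
    only the coefficients need to be transmitted."""
    dx = sum(a * f(x, y) for a, f in zip(coeffs_a, basis_f))
    dy = sum(b * g(x, y) for b, g in zip(coeffs_b, basis_g))
    return dx, dy

# Hypothetical affine basis (1, x, y) and coefficients for illustration
basis = [lambda x, y: 1.0, lambda x, y: x, lambda x, y: y]
a = [2.0, 0.5, 0.0]        # dx = 2 + 0.5*x
b = [1.0, 0.0, 0.0]        # dy = 1
dx, dy = motion_vector(a, b, basis, basis, 4.0, 3.0)
```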
In order to minimize the amount of information needed in sending the motion coefficients to the decoder, coefficients can be predicted from the coefficients of the neighboring segments. When this kind of motion field prediction is used, the motion field is expressed as a sum of a prediction motion field and a refinement motion field. The prediction motion field is constructed using the motion vectors associated with neighboring segments of the current frame. The prediction is performed using the same set of rules and possibly some auxiliary information in both encoder and decoder. The refinement motion field is coded, and the motion coefficients related to this refinement motion field are transmitted to the decoder. This approach typically results in savings in transmission bit rate. The dashed lines in FIG. 1 illustrate some examples of the possible information some motion estimation and coding schemes may require in the Motion Field Estimation block 11 and in the Motion Field Coding block 12.
Polynomial motion models are a widely used family of models. (See, for example, H. Nguyen and E. Dubois, "Representation of motion information for image coding," in Proc. Picture Coding Symposium '90, Cambridge, Mass., Mar. 26-28, 1990, pp. 841-845, and Centre de Morphologie Mathematique (CMM), "Segmentation algorithm by multicriteria region merging," Document SIM(95)19, COST 211ter Project Meeting, May 1995.) The values of motion vectors are described by functions which are linear combinations of two-dimensional polynomial functions. The translational motion model is the simplest model and requires only two coefficients to describe the motion vectors of each segment. The values of the motion vectors are given by the formulae:
Δx(x,y) = a0
Δy(x,y) = b0   (6)
This model is widely used in various international standards (ISO MPEG-1, MPEG-2, MPEG-4, ITU-T Recommendations H.261 and H.263) to describe the motion of 16×16 and 8×8 pixel blocks. Systems utilizing a translational motion model typically perform motion estimation at full pixel resolution or some integer fraction of full pixel resolution, for example with an accuracy of ½ or ⅓ pixel resolution.
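A minimal sketch of integer-resolution motion estimation for the translational model, using full-search block matching with a sum-of-absolute-differences criterion; this is a common estimation technique assumed here for illustration, not one prescribed by the description:

```python
import numpy as np

def block_match(current_block, reference, top, left, search_range):
    """Full-search block matching at integer-pixel resolution: find the
    translation (a0, b0) of equation (6) minimising the sum of absolute
    differences (SAD) within +/- search_range pixels.
    (top, left) is the block position in the current frame."""
    h, w = current_block.shape
    rh, rw = reference.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            r, c = top + dy, left + dx
            if r < 0 or c < 0 or r + h > rh or c + w > rw:
                continue  # candidate lies outside the reference frame
            cand = reference[r:r + h, c:c + w]
            sad = np.abs(current_block.astype(int) - cand.astype(int)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best

ref = np.zeros((8, 8), dtype=np.uint8)
ref[2:6, 3:7] = 200                  # bright square in the reference frame
cur = np.roll(ref, 1, axis=1)        # current frame: shifted one pixel right
block = cur[2:6, 3:7]
best = block_match(block, ref, top=2, left=3, search_range=2)
```

The estimated motion vector points from the current block back to its best match in the reference frame, so a rightward scene shift yields a negative horizontal displacement.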
Two other widely used models are the affine motion model given by the equation:
Δx(x,y) = a0 + a1x + a2y
Δy(x,y) = b0 + b1x + b2y   (7)
and the quadratic motion model given by the equation:
Δx(x,y) = a0 + a1x + a2y + a3xy + a4x² + a5y²
Δy(x,y) = b0 + b1x + b2y + b3xy + b4x² + b5y²   (8)
The affine motion model presents a very convenient trade-off between the number of motion coefficients and prediction performance. It is capable of representing some of the common real-life motion types such as translation, rotation, zoom and shear with only a few coefficients. The quadratic motion model provides good prediction performance, but it is less popular in coding than the affine model, since it uses more motion coefficients, while the prediction performance is not substantially better. Furthermore, it is computationally more costly to estimate the quadratic motion than to estimate the affine motion.
When the motion field is estimated using higher order motion models (such as presented, for example, in equations 7 and 8), the motion field estimation results in a motion field represented by real numbers. In this case the motion coefficients need to be quantized to a discrete accuracy before they are transmitted to the decoder.
The Motion Field Estimation block 11 calculates motion vectors [Δx(x,y), Δy(x,y)] of the pixels of a given segment Sk which minimize some measure of prediction error in the segment. In the simplest case the motion field estimation uses the current frame In(x,y) and the reference frame Rn(x,y) as input values. Typically the Motion Field Estimation block outputs the motion field [Δx(x,y), Δy(x,y)] to the Motion Field Coding block 12. The Motion Field Coding block makes the final decisions on what kind of motion vector field is transmitted to the decoder and how the motion vector field is coded. It can modify the motion model and motion coefficients in order to minimize the amount of information needed to describe a satisfactory motion vector field.
The image quality of transmitted video frames depends on the accuracy with which the prediction frame can be constructed, in other words on the accuracy of the transmitted motion information, and on the accuracy with which the prediction error information is transmitted. Here the term accuracy refers not only to the ability of the motion field model to represent the motion within the frame but also to the numerical precision with which the motion information and the prediction error information are represented. Motion information transmitted with high accuracy may be cancelled out in the decoding phase by low accuracy of the prediction error frame, or vice versa.
Current video coding systems employ various motion estimation and coding techniques, as discussed above. The accuracy of the motion information and the transmission bit rate needed to transmit the motion information are typically dictated by the choice of the motion estimation and coding technique, and a chosen technique is usually applied to a whole video sequence. Generally, as the accuracy of the transmitted motion information increases, the amount of transmitted information increases.
In general, better image quality requires larger amounts of transmitted information. Typically, if the available transmission bit rate is limited, this limitation dictates the best possible image quality of transmitted video frames. It is also possible to aim for a certain target image quality, and the transmission bit rate then depends on the target image quality. In current video coding and decoding systems, the trade-offs between the required transmission bit rate and image quality are mainly made by adjusting the accuracy of the information presenting the prediction error frame. This accuracy may change, for example, from frame to frame, or even between different segments of a frame.
The problem in changing the accuracy of the transmitted prediction error frame is that it may cause unexpected degradation of the overall performance of the video encoding, for example, when conforming to a new available transmission bit rate. In other words, the image quality achieved is not as good as that expected considering the transmission bit rate. The image quality may deteriorate drastically, when a lower transmission bit rate is available, or the image quality may not be enhanced even though a higher transmission bit rate is used.
The object of the invention is to provide a flexible and versatile motion compensated method for encoding/decoding video information. A further object of the invention is to provide a method that ensures good transmitted video quality for various transmission bit rates. A further object is that the method may employ various motion estimation and coding techniques.
These and other objects of the invention are achieved by selecting the quantization accuracy of the motion coefficients so that the accuracy of the transmitted motion information is compatible with the accuracy of the prediction error information.
A method according to the invention is a method for encoding video information, comprising the following steps:
estimating the motion of picture elements between a piece of reference video information and a piece of current video information,
modeling the motion of picture elements using a certain set of basis functions and certain motion coefficients,
defining a certain set of quantizers,
selecting, based on a certain predetermined selection criterion, a motion coefficient quantizer from the set of quantizers, and
quantizing the motion coefficients using the selected motion coefficient quantizer.
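The steps above can be sketched as follows. The selection criterion, the bit-rate thresholds, and the quantizer step sizes are illustrative assumptions for the sketch, not values prescribed by the method; a uniform quantizer is likewise assumed, although the method permits any set of quantizers:

```python
def select_step(target_bitrate_kbps):
    """Illustrative selection criterion: coarser quantization of the
    motion coefficients at lower target bit rates. The thresholds and
    step sizes are hypothetical, chosen only for this sketch."""
    if target_bitrate_kbps < 64:
        return 0.5
    if target_bitrate_kbps < 384:
        return 0.25
    return 0.125

def quantize_coefficients(coeffs, step):
    """Quantize real-valued motion coefficients to integer indices
    using the selected (uniform) quantizer."""
    return [round(c / step) for c in coeffs]

coeffs = [2.13, -0.48, 0.07]   # e.g. estimated affine motion coefficients
step = select_step(128)        # select a quantizer from the set
indices = quantize_coefficients(coeffs, step)
```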
In a method according to the invention, the motion of picture elements between a certain piece of reference video information and a piece of current video information is estimated. The resulting motion vector field is represented using certain basis functions and motion coefficients. The basis functions are known both to the encoder and decoder, so the transmission of said coefficients enables the decoder to determine an estimate for the motion of picture elements. Typically the coefficient values are real numbers, and therefore quantization is needed in order to present the coefficients to a certain discrete accuracy using a certain number of bits. The coefficients are quantized before transmission to the decoder, or before using them in constructing a prediction frame.
In a method according to the invention, a set of quantizers is provided. Here the term quantizer refers to a function mapping real numbers to certain reconstruction values. For each reconstruction value there is a quantization interval determining the range of real numbers which are mapped/quantized to said reconstruction value. The size of the quantization intervals can be, for example, the same for each reconstruction value (uniform quantizer) or the size of the quantization interval can be different for each reconstruction value (nonuniform quantizer). The quantization interval determines the accuracy with which the coefficients are represented. The quantizers in the set may all be similar so that the reconstruction values and the quantization intervals are scaled from quantizer to quantizer using a certain parameter. It is also possible that the set of quantizers comprises different types of quantizer, both uniform and non-uniform, for example. Quantizers are further discussed in the detailed description of the invention.
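The uniform case described above can be sketched as follows; reconstruction values at the centres of equal-sized intervals and the particular step sizes are assumptions of this sketch:

```python
import numpy as np

def make_uniform_quantizer(step):
    """A uniform scalar quantizer: every quantization interval has the
    same size `step`, and each reconstruction value sits at the centre
    of its interval. quantize() maps a real number to an integer index;
    dequantize() maps the index back to the reconstruction value."""
    def quantize(value):
        return int(np.round(value / step))
    def dequantize(index):
        return index * step
    return quantize, dequantize

# A set of similar quantizers whose intervals are scaled by a parameter
steps = [1.0, 0.5, 0.25]
quantizers = [make_uniform_quantizer(s) for s in steps]

quantize, dequantize = quantizers[1]   # the step-0.5 quantizer
index = quantize(1.37)
value = dequantize(index)              # reconstruction error <= step/2
```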
The selection criterion for the quantizer in a method according to the invention can be, for example, the target image quality or the target transmission bit rate. The selection of a quantizer may also be bound to some other variable which depends on the target image quality or the transmission bit rate. A new quantizer can be selected, for example, each time the target transmission bit rate changes. For various parts of a video frame, for example, it is possible to use different quantizers.
According to the invention, when the target image quality changes, it is possible to adjust the accuracy with which both the prediction error information and the motion information are encoded and transmitted. Therefore, for each image quality or transmission bit rate, it is possible to obtain good overall coding performance. It is possible, for example, to adjust both the accuracy of the prediction error information and that of the motion information to be able to transmit the encoded video stream using a certain bit rate. It is also possible, for example, that the accuracy of the prediction error information is dictated by the target image quality, and in the method according to the invention the accuracy of the motion information is adjusted to be compatible with the prediction error accuracy: quantization should not be too fine, because the motion information cannot enhance image quality beyond the accuracy provided by the prediction error information, but not too coarse either, because coarsely quantized motion information may deteriorate the image quality provided by the prediction error information.
A method according to the invention does not pose any restrictions on the motion field estimation or motion field coding techniques used to obtain the motion coefficients. It is therefore applicable with any such techniques. For example, motion model adaptation may be used to avoid the cost of using an overly accurate motion field estimation and coding technique by providing a selection of motion field estimation and/or coding techniques providing various accuracies. An appropriate motion field estimation and/or coding technique can then be selected based on the target image quality or target bit rate, and fine-tuning between the prediction error information accuracy and the motion information accuracy can be performed by selecting a proper quantizer.
The invention can be straightforwardly applied to existing motion compensated video coding methods and systems. In such prior-art systems, the quantization of motion coefficients is typically done to a certain, predetermined accuracy, which works well for a certain target image quality. When the target image quality or the available transmission bit rate differs remarkably from the designed image quality, the video encoder produces worse image quality than that which could be achieved for a given transmission bit rate.
This effect can be eliminated by selecting a more appropriate quantizer for the motion coefficients according to the invention.
The invention also relates to a method for decoding encoded video information, comprising the following steps:
receiving quantized motion coefficients describing motion of picture elements,
defining a set of inverse quantizers,
determining the motion coefficient quantizer with which the motion coefficients were quantized,
performing inverse quantization of the quantized motion coefficients using an inverse quantizer corresponding to the selected motion coefficient quantizer,
determining the motion of the picture elements using the inverse quantized motion coefficients and certain basis functions, and
determining a piece of prediction video information using a piece of reference video information and the determined motion of the picture elements.
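The decoding steps above can be sketched in the same way; the affine basis, the step size, and the received indices are illustrative assumptions, and in practice the step size would be signalled or agreed between encoder and decoder:

```python
def decode_motion(indices, step, basis, x, y):
    """Decoder side: inverse-quantize the received coefficient indices
    and evaluate the motion at pixel (x, y) using the basis functions
    known to both encoder and decoder."""
    coeffs = [i * step for i in indices]            # inverse quantization
    return sum(c * f(x, y) for c, f in zip(coeffs, basis))

# Hypothetical affine basis (1, x, y); indices as received from the encoder
basis = [lambda x, y: 1.0, lambda x, y: x, lambda x, y: y]
dx = decode_motion([9, -2, 0], step=0.25, basis=basis, x=2.0, y=1.0)
```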
An encoder according to the invention is an encoder for performing motion compensated encoding of video information, comprising:
means for receiving a piece of current video information,
memory means for storing a piece of reference video information,
motion field estimation means for estimating a motion field of picture elements in the piece of current video information using at least the piece of reference video information,
motion field coding means, which comprise
means for producing motion coefficients describing the estimated motion field,
first selection means for selecting a quantizer from a set of quantizers, said first selection means having an input to receive information indicating a selection criterion and an output to send information indicating the selected quantizer, and
quantization means for quantizing motion coefficients using the selected quantizer, said quantization means having an input to receive information indicating the selected quantizer, a second input to receive the motion coefficients, and an output to send the quantized motion coefficients, and
motion compensated prediction means, which means comprise
second selection means for selecting an inverse quantizer from a set of inverse quantizers, said second selection means having an input to receive information indicating a selection criterion and an output to send information indicating the selected inverse quantizer,
inverse quantization means for inversely quantizing the quantized motion coefficients using the selected inverse quantizer, said inverse quantization means having an input to receive the quantized motion coefficients, a second input to receive information indicating the selected inverse quantizer and an output to send the inverse quantized motion coefficients, and
means for determining a piece of prediction video information using at least the piece of reference video information and the inverse quantized motion coefficients.
The invention relates further to a decoder for performing the decoding of encoded video information, comprising:
memory means for storing a piece of reference video information,
input means for receiving quantized motion coefficients, and
motion compensated prediction means, which comprise
selection means for selecting an inverse quantizer from a set of inverse quantizers, said selection means having an input to receive information indicating a selection criterion and an output to send information indicating the selected inverse quantizer,
inverse quantization means for inversely quantizing the quantized motion coefficients using the selected inverse quantizer, said inverse quantization means having an input to receive the quantized motion coefficients, a second input to receive information indicating the selected inverse quantizer and an output to send the inverse quantized motion coefficients, and
prediction means for determining a piece of prediction video information using at least the piece of reference video information and the inverse quantized motion coefficients.
In one advantageous embodiment of the invention, the encoder and decoder are combined to form a codec. The motion compensated prediction parts of the encoder and decoder are similar, and they may be provided by a common part, which is arranged to operate as a part of the encoder and a part of the decoder, for example, alternatingly.
The invention relates further to a computer program element for performing motion compensated encoding of video information, comprising:
means for receiving a piece of current video information,
memory means for storing a piece of reference video information,
motion field estimation means for estimating a motion field of picture elements in the piece of current video information using at least the piece of reference video information,
motion field coding means, which comprise
means for producing motion coefficients describing the estimated motion field,
first selection means for selecting a quantizer from a set of quantizers, said first selection means having an input to receive information indicating a selection criterion and an output to send information indicating the selected quantizer, and
quantization means for quantizing motion coefficients using the selected quantizer, said quantization means having an input to receive information indicating the selected quantizer, a second input to receive the motion coefficients, and an output to send the quantized motion coefficients, and
motion compensated prediction means, which means comprise
second selection means for selecting an inverse quantizer from a set of inverse quantizers, said second selection means having an input to receive information indicating a selection criterion and an output to send information indicating the selected inverse quantizer,
inverse quantization means for inversely quantizing the quantized motion coefficients using the selected inverse quantizer, said inverse quantization means having an input to receive the quantized motion coefficients, a second input to receive information indicating the selected inverse quantizer and an output to send the inverse quantized motion coefficients, and
means for determining a piece of prediction video information using at least the piece of reference video information and the inverse quantized motion coefficients.
A second computer program element according to the invention is a computer program element for performing the decoding of encoded video information, comprising:
memory means for storing a piece of reference video information,
input means for receiving quantized motion coefficients, and
motion compensated prediction means, which comprise
selection means for selecting an inverse quantizer from a set of inverse quantizers, said selection means having an input to receive information indicating a selection criterion and an output to send information indicating the selected inverse quantizer,
inverse quantization means for inversely quantizing the quantized motion coefficients using the selected inverse quantizer, said inverse quantization means having an input to receive the quantized motion coefficients, a second input to receive information indicating the selected inverse quantizer and an output to send the inverse quantized motion coefficients, and
prediction means for determining a piece of prediction video information using at least the piece of reference video information and the inverse quantized motion coefficients.
According to one advantageous embodiment of the invention, a computer program element as specified above is embodied on a computer readable medium.
The novel features which are considered as characteristic of the invention are set forth in particular in the appended claims. The invention itself, however, both as to its construction and its method of operation, together with additional objects and advantages thereof, will be best understood from the following description of specific embodiments when read in connection with the accompanying drawings.