The invention relates to a method of encoding images represented by digital signals organized in luminance and chrominance blocks, which may themselves be regrouped in macroblocks, the method particularly comprising:
a first step of quantizing said signals,
a second step of variable-length encoding the signals thus quantized, whereafter the encoded signals are stored in a buffer memory, the quantization step used during the first step being computed on the basis of a feedback parameter, which is a decreasing function of the filling level of the buffer memory, and of a multiplicative correction factor in the form of a feedforward parameter, which is a function of the quantity of information in the image.
In the expression for the chrominance factor referred to hereinafter:
c is an average value of the complexity a in an image,
n represents a parameter for adjusting the variation limits of said chrominance factor,
m represents the average value of the corresponding chrominance component in the current block or macroblock,
g represents the value of this chrominance component corresponding to a grey pixel.
The invention likewise relates to a device for carrying out this method, comprising:
a module for quantizing the digital signals corresponding to images,
a module for variable-length encoding of the signals thus quantized,
a buffer memory,
and, arranged between the buffer memory and the quantization module, a module for bitrate control of the buffer memory output, comprising means for modifying the quantization step with the aid of a feedback parameter which is a decreasing function of the filling level of the buffer memory and with the aid of a multiplicative correction factor in the form of a feedforward parameter which is a function of the quantity of information in the image and is provided by a weighting module.
Such a method is in conformity with the draft MPEG2 standard (Moving Pictures Expert Group) of ISO and is used particularly in the field of transmitting and storing images.
The document published by ISO under the reference "ISO-IEC/JTC1/SC29/WG11; Test Model 4.2" in February 1993 describes a process for controlling the bitrate at the buffer memory output. It consists in varying, within the image, the number of binary elements allocated for encoding each block or macroblock, with respect to the average number, as a function of the quantity of information in each block or macroblock. The more information a block or macroblock contains, the less apparent encoding faults are, and the quantization step may be augmented without entailing a very considerable loss of quality. When, in contrast, a block or macroblock contains very little information, a fine quantization must be used so as to prevent this information from being lost.
The MPEG encoding structure will be briefly described hereinafter. A digital image may be represented by an assembly of three matrices of eight-bit values: one luminance matrix and two chrominance matrices. These matrices are divided into blocks of 8×8 pixels, such that four adjacent blocks of the luminance matrix correspond to one block of each chrominance matrix. The six blocks thus obtained form a macroblock. The macroblock is the basic unit used for estimating and compensating motion and for choosing the quantization step; a macroblock header thus comprises the value of the quantization step used by the quantizer. Several macroblocks are subsequently regrouped in a slice, several slices form an image, several images are regrouped in a group of pictures or GOP, and several GOPs form a sequence. A sequence header particularly comprises the quantization matrix, of size 8×8, used for quantizing each block of the sequence, when this matrix differs from that used in the preceding sequence and does not belong to the set of matrices predefined by the standard; otherwise it is sufficient to indicate which predefined matrix is used.
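The grouping described above can be illustrated by a short sketch. The function name and the 4:2:0 subsampling layout (chrominance planes at half resolution, so four luminance blocks share one U and one V block) are assumptions made for the illustration; the standard defines the exact syntax.

```python
import numpy as np

def macroblocks(y, u, v):
    """Group four adjacent 8x8 luminance blocks with the co-sited
    8x8 U and V blocks into six-block macroblocks (4:2:0 layout)."""
    mbs = []
    for r in range(0, y.shape[0], 16):
        for c in range(0, y.shape[1], 16):
            luma = [y[r + dr:r + dr + 8, c + dc:c + dc + 8]
                    for dr in (0, 8) for dc in (0, 8)]
            chroma = [u[r // 2:r // 2 + 8, c // 2:c // 2 + 8],
                      v[r // 2:r // 2 + 8, c // 2:c // 2 + 8]]
            mbs.append(luma + chroma)  # 4 luminance + 2 chrominance blocks
    return mbs

y = np.zeros((32, 32), dtype=np.uint8)   # luminance plane
u = np.zeros((16, 16), dtype=np.uint8)   # subsampled chrominance planes
v = np.zeros((16, 16), dtype=np.uint8)
mbs = macroblocks(y, u, v)
print(len(mbs), len(mbs[0]))  # 4 macroblocks of 6 blocks each
```

A 32×32 luminance plane thus yields four 16×16 macroblocks, each carrying six 8×8 blocks.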
An encoding device with which the method as described in the above document can be carried out is shown in FIG. 1. It comprises in series a DCT (Discrete Cosine Transform) module 15, a module 20 for quantizing the DCT coefficients thus obtained, a module 30 for variable-length encoding of the coefficients thus quantized and a buffer memory 40, a first output of which is connected to the output 41 of the device. In the embodiment described, the device also comprises a prediction branch connected to the output of the quantization module 20 and comprising an inverse quantization module 50 and an inverse DCT module 60 whose output is connected to a first input of a prediction module 70 which is connected to the input of the DCT module 15. A second input of this prediction module 70 is connected to the input 71 of the device. Moreover, a second output of the buffer memory 40 is connected to a bitrate control module 80 to which it supplies a feedback parameter related to the filling level of the buffer memory. Furthermore, the input 71 of the device is connected to the input of a weighting module 90a whose output is connected to the bitrate control module 80 to which it supplies a feedforward parameter. The output of the bitrate control module 80 is connected to the quantization module 20.
With the prediction branch it is possible not to encode the temporal redundancy in the images: for each incoming macroblock the prediction module 70 evaluates a prediction macroblock on the basis of blocks of previously transmitted images, which are supplied to the input of the prediction module 70 after passage through the inverse quantization module 50 and the inverse DCT module 60. Subsequently, it compares them so as to determine whether it is more advantageous to encode the original macroblock or the difference between the original macroblock and the predicted macroblock. The DCT module 15 processes the blocks of 8×8 pixels. As soon as the DCT coefficients are obtained, they are quantized by the quantization module 20 as a function of a quantization step provided by the bitrate control module 80. The quantization operates as follows: C_dctQi = C_dcti / (W_i × Q2)
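The comparison performed by the prediction module can be sketched as follows. The source does not specify the comparison criterion, so the sum-of-absolute-values test below is an illustrative assumption, as is the function name.

```python
import numpy as np

def choose_mode(original, predicted):
    """Decide whether to encode the original macroblock or the
    difference with the predicted macroblock. The sum-of-absolute-
    values criterion is an illustrative stand-in for the encoder's
    actual comparison."""
    residual = original.astype(int) - predicted.astype(int)
    if np.abs(residual).sum() < np.abs(original.astype(int)).sum():
        return "inter", residual        # encode the difference
    return "intra", original            # encode the macroblock itself

orig = np.full((16, 16), 100)
pred = np.full((16, 16), 98)            # good prediction -> small residual
mode, data = choose_mode(orig, pred)
print(mode)  # inter
```

When the prediction is good the residual carries far less energy than the original macroblock, so encoding the difference costs fewer bits.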
where C_dcti is the i-th transform coefficient, C_dctQi its quantized value, W_i the i-th coefficient of the quantization matrix W used for the current sequence, and Q2 the quantization step used. Thus, the higher the value of the quantization step Q2, the coarser the quantization and the less precise the coefficients obtained during decoding.
Once the transform coefficients thus obtained have been quantized, they are encoded by the variable-length encoding module 30 and applied to the buffer memory 40. To control the bitrate of this buffer memory 40, the bitrate control module 80 varies, for each macroblock, the quantization step Q2, whose value is transmitted to the decoder in the header of the macroblock. This variation is realised as a function of two parameters.
The feedback parameter, which is related to the filling level of the buffer memory 40, provides the possibility of computing a first value Q1 of the quantization step, which is larger as the buffer memory is fuller. A mode of computing Q1 is described in the above-mentioned draft standard.
The feedforward parameter, denoted P, which is supplied by the weighting module 90a, enables the bitrate control module 80 to modify this first value Q1 so as to take the contents of the image into account. The quantization step Q2 thus obtained is equal to: Q2 = Q1 × P
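The two-parameter control loop can be sketched as follows. The text only states that Q1 increases with the buffer filling level and refers to the draft standard for the exact rule, so the linear mapping below is an illustrative assumption; the multiplication Q2 = Q1 × P is taken from the text.

```python
def feedback_step(fullness, capacity, q_max=31):
    """Map the buffer filling level to a first quantization step Q1.
    The linear mapping is an illustrative assumption; the test model
    defines the exact rule."""
    return max(1, round(q_max * fullness / capacity))

def controlled_step(fullness, capacity, p):
    """Final step transmitted in the macroblock header: Q2 = Q1 * P,
    with P the feedforward weighting factor."""
    q1 = feedback_step(fullness, capacity)
    return q1 * p

# Half-full buffer, low-information macroblock (P < 1 -> finer step):
print(controlled_step(fullness=500_000, capacity=1_000_000, p=0.5))
```

A macroblock with little information (P < 1) is thus quantized more finely than the buffer level alone would dictate, and a detailed macroblock (P > 1) more coarsely.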
In this known device, the weighting module 90a is constituted by a module 94 for estimating the quantity of information in the macroblock to be encoded with respect to the average computed over an image. The feedforward parameter provided by this weighting module 90a is thus equal to a factor F_Y, referred to as the luminance factor, which is smaller as this quantity of information is smaller. It is expressed in the following manner: F_Y = (n_Y × a_Y + c_Y) / (a_Y + n_Y × c_Y), where a_Y, c_Y and n_Y are, respectively, the quantity of information in the macroblock, the average quantity of information in a macroblock computed over the preceding image, and a fixed parameter for adjusting the variation limits of the quantization step (Q1/n_Y < Q2 < n_Y × Q1). The value of n_Y is preferably chosen to be about 2, with which a sufficiently large range of variation is obtained while a satisfactory image quality is maintained. The quantity a_Y of information components in a macroblock is given by the minimum value of the variance computed in each block of a field D, preferably constituted by the current block or macroblock and the directly contiguous blocks. The variance of a luminance block B is defined by the following expression: Var(B) = (1/N) × Σ x_i,j² − [(1/N) × Σ x_i,j]², in which Var indicates the variance, N represents the number of pixels in the luminance block B, and x_i,j denotes their luminance values.
Thus, the quantity a_Y of information components in a block or macroblock is equal to: a_Y = 1 + Min_{B_k ∈ D} [Var(B_k)]
where B.sub.k represents the blocks of the field D.
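The computation of the luminance factor from these definitions can be sketched as follows. The formulas follow the text (variance, a_Y as one plus the minimum variance over the field D, and F_Y bounded within [1/n_Y, n_Y]); the example block contents are arbitrary.

```python
import numpy as np

def variance(block):
    """Var(B) = (1/N) * sum(x^2) - ((1/N) * sum(x))^2 over the N pixels."""
    x = block.astype(float)
    n = x.size
    return (x ** 2).sum() / n - (x.sum() / n) ** 2

def a_y(field_blocks):
    """a_Y = 1 + min over the blocks B_k of the field D of Var(B_k)."""
    return 1.0 + min(variance(b) for b in field_blocks)

def f_y(a, c, n=2.0):
    """Luminance factor F_Y = (n*a + c) / (a + n*c), which stays
    within [1/n, n]; n is about 2 per the text."""
    return (n * a + c) / (a + n * c)

flat = np.full((8, 8), 128)           # uniform block: variance 0
busy = np.arange(64).reshape(8, 8)    # detailed block: large variance
a = a_y([flat, busy])                 # minimum variance is 0 -> a_Y = 1
print(a, round(f_y(a, c=10.0), 3))
```

Because a_Y takes the minimum variance over the field, a single flat block keeps a_Y small, F_Y below 1, and hence the quantization fine, protecting the low-information zone.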
However, extensive encoding experience gained with different test sequences shows that the human eye is particularly sensitive to encoding faults in zones where the image, or one of the chrominance components U or V, is largely saturated.
It is an object of the present invention to take this characteristic of the human eye into account.