Digital images and image sequences occupy a great deal of memory space. When transmitting these images, it is therefore necessary to compress them in order to avoid congestion in the communications network used for this transmission, since the bit rate available on this network is generally limited.
There already exist numerous video data compression techniques. Among them, the H.264 technique implements a prediction of the pixels of a current image relative to other pixels belonging to the same image (intra prediction) or to a previous or following image (inter prediction).
More specifically, according to this H.264 technique, I images are encoded by spatial prediction (intra prediction), while P and B images are encoded by temporal prediction relative to other, already encoded/decoded I, P or B images (inter prediction), for example by means of motion compensation.
To this end, the images are sub-divided into macroblocks which are then sub-divided into blocks. Each block or macroblock is encoded by intra or inter image prediction.
Classically, the encoding of the current block is done by means of a prediction of the current block, called the predicted block, and a prediction residue corresponding to the difference between the current block and the predicted block. This prediction residue, also called the residual block, is transmitted to the decoder, which rebuilds the current block by adding this residual block to the prediction.
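As an illustration, the residue mechanism described above can be sketched as follows (a minimal Python/NumPy example; the block contents and function names are arbitrary choices for illustration, not part of any standard):

```python
import numpy as np

def prediction_residue(current_block, predicted_block):
    # Residual block: difference between the current block
    # and its prediction (the predicted block).
    return current_block - predicted_block

def rebuild(predicted_block, residual_block):
    # The decoder rebuilds the current block by adding the
    # residual block to the prediction.
    return predicted_block + residual_block

current = np.array([[52, 55], [61, 59]], dtype=np.int16)
predicted = np.array([[50, 50], [60, 60]], dtype=np.int16)
residual = prediction_residue(current, predicted)
# Lossless round trip: prediction + residue gives back the current block.
assert np.array_equal(rebuild(predicted, residual), current)
```

In an actual codec the residual block is not transmitted as-is but transformed and quantized first, so the round trip is only approximate.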
The prediction of the current block is done by means of already rebuilt information (preceding blocks already encoded/decoded in the current image, previously encoded images in the context of video encoding, etc.). The residual block obtained is then transformed, for example by using a DCT (discrete cosine transform) type of transform. The coefficients of the transformed residual block are then quantized, and then encoded by entropy encoding.
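The transform and quantization steps can be sketched as follows. This is an illustrative Python/NumPy example using an orthonormal DCT-II applied separably and a uniform scalar quantizer; these particular choices, and the function names, are assumptions for illustration and not the exact H.264 design:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix; the 2-D transform
    # is applied separably (rows then columns).
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def transform_and_quantize(residual, step):
    c = dct_matrix(residual.shape[0])
    coeffs = c @ residual @ c.T        # 2-D DCT of the residual block
    return np.round(coeffs / step)     # uniform scalar quantization

def dequantize_and_inverse(levels, step):
    c = dct_matrix(levels.shape[0])
    coeffs = levels * step             # inverse quantization
    return c.T @ coeffs @ c            # inverse 2-D DCT

residual = np.arange(16, dtype=float).reshape(4, 4)
levels = transform_and_quantize(residual, step=2.0)
rebuilt = dequantize_and_inverse(levels, step=2.0)
# Reconstruction matches the residual up to quantization error.
assert np.max(np.abs(rebuilt - residual)) <= 4.0
```

The quantized levels would then be entropy-encoded before transmission; that step is omitted here.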
The decoding is done image by image and, for each image, block by block or macroblock by macroblock. For each (macro)block, the corresponding elements of the stream are read. The inverse quantization and the inverse transformation of the coefficients of the residual block or blocks associated with the (macro)block are performed. Then, the prediction of the (macro)block is computed and the (macro)block is rebuilt by adding this prediction to the decoded residual block or blocks.
According to this compression technique, transformed, quantized and encoded residual blocks are therefore transmitted to the decoder to enable it to rebuild the original image or images.
Unfortunately, during this transmission, it is possible that certain coefficients of these residual blocks will be deteriorated or lost, especially when the transmission is noisy. The use of these "deteriorated" residual blocks during the rebuilding of the original image therefore leads to an image of poor quality.
To overcome this problem, it has notably been proposed to restore randomly lost coefficients or sets of lost coefficients in a wavelet-transformed or DCT-transformed block. The coefficients thus restored enable the decoder to render an acceptable image.
According to these techniques, the position of the lost coefficients is detected in a first phase. Then, in a second phase, these coefficients are restored as a function of their neighborhood (other coefficients, or the spatial neighborhood in the rebuilt image).
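The two phases above can be sketched as follows. This is a minimal Python/NumPy sketch in which each lost coefficient is replaced by the mean of its available 4-neighbours inside the transformed block; this particular neighborhood rule is an assumption for illustration, and actual restoration techniques may instead use the spatial neighborhood in the rebuilt image:

```python
import numpy as np

def restore_lost_coefficients(coeffs, lost_mask):
    # Phase 1 is assumed done: `lost_mask` marks the positions of the
    # lost coefficients. Phase 2: restore each lost coefficient from
    # the mean of its available (non-lost) 4-neighbours.
    restored = coeffs.astype(float).copy()
    h, w = coeffs.shape
    for y, x in zip(*np.nonzero(lost_mask)):
        neighbours = []
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not lost_mask[ny, nx]:
                neighbours.append(restored[ny, nx])
        restored[y, x] = np.mean(neighbours) if neighbours else 0.0
    return restored

coeffs = np.array([[10.0, 20.0], [30.0, 40.0]])
lost = np.zeros((2, 2), dtype=bool)
lost[0, 0] = True  # suppose this coefficient was lost in transmission
restored = restore_lost_coefficients(coeffs, lost)
assert restored[0, 0] == 25.0  # mean of the two surviving neighbours
```

A coefficient with no surviving neighbour is set to zero here, a conservative fallback choice made for this sketch.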
The restoring of the coefficients is therefore implemented at the decoder. In other words, these techniques propose a post-processing operation aimed at rebuilding a signal that is acceptable to the decoder.
One drawback of these restoration techniques is that they are costly in terms of decoder resources. Indeed, the decoder must first of all identify the position of the lost coefficients before being able to start the restoration phase. The decoder must therefore either proceed "blindly" or have an identification module to locate the lost coefficients before it can restore them.
There is therefore a need for a novel technique for encoding/decoding images that improves the quality of the rebuilt signal while at the same time simplifying the processing implemented at the decoder.