Present-day video encoders (MPEG, H.264, etc.) use a blockwise representation of the video sequence. The images are subdivided into macro-blocks, each macro-block is itself subdivided into blocks, and each block or macro-block is encoded by intra-image or inter-image prediction. Thus, I images are encoded by spatial prediction (intra-prediction), while P and B images are encoded by temporal prediction relative to other, previously encoded/decoded I, P or B images, by means of motion compensation. Furthermore, for each block, a residual block corresponding to the original block minus a prediction is encoded. The coefficients of this block are quantized, after a possible transformation, and then encoded by an entropy encoder.
The emphasis here is more particularly on the entropy encoder. The entropy encoder encodes the pieces of information in their order of arrival. Typically, the blocks are scanned line by line, in a "raster-scan" order, as illustrated in FIG. 1, starting from the block at the top left of the image. For each block, the different pieces of information needed for the representation of the block (type of block, prediction mode, residue coefficients, etc.) are sent sequentially to the entropy encoder.
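As an illustration, a minimal sketch of the raster-scan order described above (the function name and image dimensions are hypothetical, chosen only for this example):

```python
def raster_scan_blocks(width, height, block_size):
    """Yield the (x, y) top-left coordinates of the blocks in raster-scan
    order: line by line, starting from the top-left block of the image."""
    for y in range(0, height, block_size):
        for x in range(0, width, block_size):
            yield (x, y)

# For a 64x32 image with 16x16 macro-blocks, the scan visits two lines
# of four blocks each, left to right, top to bottom:
order = list(raster_scan_blocks(64, 32, 16))
# [(0, 0), (16, 0), (32, 0), (48, 0), (0, 16), (16, 16), (32, 16), (48, 16)]
```

The symbols of each block would then be sent to the entropy encoder in exactly this order.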
An arithmetic encoder that is efficient and of reasonable complexity is already known: the CABAC ("Context Adaptive Binary Arithmetic Coder"), introduced in the AVC compression standard (also known as ISO-MPEG4 Part 10 and ITU-T H.264).
This entropy encoder implements different concepts:
- arithmetic encoding: the encoder, as first described in J. Rissanen and G. G. Langdon Jr., "Universal modeling and coding", IEEE Trans. Inform. Theory, vol. IT-27, pp. 12-23, Jan. 1981, uses a probability of appearance of a symbol in order to encode that symbol;
- context adaptation: here, the probability of appearance of the symbols to be encoded is adapted. On the one hand, on-the-fly learning is achieved; on the other hand, a specific context is used for the encoding, depending on the state of the previously encoded information. Each context has its own corresponding probability of appearance of the symbol. For example, a context corresponds to a type of encoded symbol (the representation of a coefficient of a residue, the signaling of an encoding mode, etc.) according to a given configuration, or to a state of the neighborhood (for example, the number of "intra" modes selected in the neighborhood, etc.);
- binarisation: the symbols to be encoded are put into the form of a sequence of bits; these different bits are then sent successively to the binary entropy encoder.
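The arithmetic-encoding principle can be sketched as follows. This is a toy floating-point interval coder, not CABAC itself (which uses integer ranges, renormalisation and a multiplication-free range update); it only illustrates how a probability of appearance drives the encoding of each bit:

```python
class BinaryArithmeticEncoder:
    """Toy interval coder: each bit narrows the interval [low, high)
    in proportion to its probability of appearance."""

    def __init__(self):
        self.low, self.high = 0.0, 1.0

    def encode(self, bit, p_one):
        # Split the current interval according to P(bit = 1); the lower
        # sub-interval is assigned to bit 0, the upper one to bit 1.
        split = self.low + (self.high - self.low) * (1.0 - p_one)
        if bit:
            self.low = split     # keep the upper sub-interval
        else:
            self.high = split    # keep the lower sub-interval

    def finish(self):
        # Any value inside [low, high) identifies the encoded sequence.
        return (self.low + self.high) / 2

enc = BinaryArithmeticEncoder()
for bit in (1, 0, 1):
    enc.encode(bit, p_one=0.5)   # fixed probability here, for simplicity
code_value = enc.finish()        # 0.6875 for this sequence
```

The more accurately `p_one` reflects the true statistics of the bits, the narrower the final interval stays and the fewer bits are needed to designate a value inside it, which is why the context adaptation described above matters.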
Thus, for each context used, this entropy encoder implements a system of on-the-fly learning of the probabilities, relative to the symbols previously encoded for the context considered. This learning is based on the order of encoding of these symbols. Typically, the image is scanned in a "raster-scan" type of order, as described here above.
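A minimal sketch of such per-context on-the-fly learning, using a simple counts-based estimate (CABAC itself uses a finite-state probability table; this is only illustrative):

```python
class ContextModel:
    """Adaptive probability estimate for one context, updated on the fly
    from the symbols previously encoded for that context."""

    def __init__(self):
        self.ones, self.total = 1, 2   # Laplace-smoothed counts

    def p_one(self):
        # Current estimate of P(bit = 1) for this context.
        return self.ones / self.total

    def update(self, bit):
        # Learn from the symbol just encoded/decoded.
        self.ones += bit
        self.total += 1

ctx = ContextModel()
for bit in (1, 1, 0, 1):
    ctx.update(bit)
# After three 1s and one 0: p_one = (1 + 3) / (2 + 4) = 4/6
```

Because each update depends on the bits seen so far, the decoder must replay the updates in exactly the same order as the encoder, which is the source of the sequential-decoding constraint discussed below.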
Owing to the use of such a scanning order, this entropy encoder has several drawbacks.
Indeed, a lack of local matching in the learning of the probabilities can be seen, due to the type of scan. For example, during the encoding of a symbol situated at the beginning of a line, the probabilities used correspond chiefly to those observed for the symbols situated at the end of the previous line. Now, because of the possible spatial variation of the probabilities of the symbols (for example, for a symbol related to a piece of motion information, the motion observed in the right-hand part of an image can be different from that observed in the left-hand part, and this can therefore also be the case for the local probabilities that result therefrom), this lack of local matching of the probabilities leads to a loss of efficiency during the encoding.
To limit this phenomenon, proposals have been made to modify the scanning order of the blocks in order to improve local consistency, but the encoding and the decoding remain sequential.
This is a second drawback of this type of entropy encoder. Indeed, since the encoding and decoding of a symbol depend on the state of the probabilities learned until then, the symbols can be decoded only in the same order as that used during encoding. The decoding can therefore only be sequential, thus preventing the parallel decoding of several symbols (for example, to benefit from multicore architectures).
The patent document US2009168868A1, entitled “Systems and apparatuses for performing CABAC parallel encoding and decoding” describes a method enabling a type of parallelization of the encoding and/or of the decoding in a CABAC type encoder.
In a first embodiment described in this document, certain steps implemented in the encoder, for example the binarisation and the definition of the contexts, can be performed in parallel for one and the same syntax element or symbol. This embodiment does not enable true parallel decoding of several symbols, because the sequential decoding constraint remains present.
According to a second embodiment, an image is subdivided into slices, and the CABAC encoder is initialized at each start of a slice, in order to optimize the initialization of the probabilities. This embodiment enables a parallel decoding of the slices but the initialization of the decoder at each new slice greatly reduces the performance of the decoder. The lack of local matching of the probabilities is therefore further reinforced here.
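The trade-off of this second embodiment can be illustrated with a hypothetical sketch (the update rule and parameter values below are illustrative, not those of the cited document): re-initialising the probability at each slice makes the slices independent, but discards the statistics learned on the previous slice.

```python
def p_one_after(bits, p0=0.5, alpha=0.05):
    """Illustrative exponential on-the-fly probability update
    (not CABAC's actual state machine)."""
    p = p0
    for b in bits:
        p += alpha * (b - p)   # move the estimate toward each observed bit
    return p

slice_a = [1] * 40   # a region where 1s strongly dominate
slice_b = [1] * 40   # the next slice has the same statistics

# Without re-initialisation, slice_b starts from the probability already
# learned on slice_a; with per-slice initialisation it restarts at p0 = 0.5.
p_continued = p_one_after(slice_b, p0=p_one_after(slice_a))
p_reset = p_one_after(slice_b, p0=0.5)
# p_continued > p_reset: the per-slice reset loses learned statistics,
# and the probability must converge again from scratch in every slice.
```

This is the sense in which the lack of local matching of the probabilities is further reinforced by per-slice initialization.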
There is therefore a need for a novel technique for the entropy encoding of a sequence of symbols representing an image or a series of images, one that makes it possible, in particular, to offer non-sequential decoding of the encoded symbols without any loss of compression performance.