Current video coders (MPEG, H.264, etc.) use a block-wise representation of the video sequence. The images are cut up into macro-blocks, each macro-block is itself cut up into blocks, and each block, or macro-block, is coded by intra-image or inter-image prediction. Thus, certain images are coded by spatial prediction (intra prediction), while other images are coded by temporal prediction (inter prediction) with respect to one or more coded-decoded reference images, with the aid of a motion compensation known to the person skilled in the art. Moreover, for each block a residual block can be coded, corresponding to the original block minus its prediction. The coefficients of this block are quantized, after an optional transformation, and then coded by an entropy coder.
Intra prediction and inter prediction require that certain blocks which have been previously coded and decoded be available, so as to be used, either at the decoder or at the coder, to predict the current block. A schematic example of such predictive coding is represented in FIG. 1A, in which an image IN is divided into blocks, a current block MBi of this image being subjected to predictive coding with respect to a predetermined number of three blocks MBr1, MBr2 and MBr3 previously coded and decoded, as indicated by the hatched arrows. These three blocks specifically comprise the block MBr1 situated immediately to the left of the current block MBi, and the two blocks MBr2 and MBr3 situated respectively immediately above and immediately above and to the right of the current block MBi.
Of more particular interest here is the entropy coder. The entropy coder encodes the information in its order of arrival. Typically a row-by-row traversal of the blocks is carried out, of “raster-scan” type, as illustrated in FIG. 1A by the reference PRS, starting from the block at the top left of the image. For each block, the various items of information necessary for the representation of the block (type of block, mode of prediction, residual coefficients, etc.) are dispatched sequentially to the entropy coder.
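The "raster-scan" traversal described above can be sketched as follows; the function name is illustrative.

```python
def raster_scan(width, height):
    """Yield block coordinates row by row, left to right, starting
    from the block at the top left of the image."""
    for y in range(height):
        for x in range(width):
            yield (x, y)

# For a 3-block-wide, 2-block-high image, the blocks are visited as:
order = list(raster_scan(3, 2))
# [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
```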
An effective arithmetic coder of reasonable complexity, called "CABAC" ("Context Adaptive Binary Arithmetic Coder"), is already known; it was introduced in the AVC compression standard (also known as ISO-MPEG-4 Part 10 and ITU-T H.264).
This entropy coder implements various concepts:
        arithmetic coding: the coder, as described initially in the document J. Rissanen and G. G. Langdon Jr., "Universal modeling and coding," IEEE Trans. Inform. Theory, vol. IT-27, pp. 12-23, January 1981, uses, to code a symbol, a probability of occurrence of this symbol;
        context adaptation: this entails adapting the probability of occurrence of the symbols to be coded. On the one hand, on-the-fly learning is carried out. On the other hand, according to the state of the previously coded information, a specific context is used for the coding. To each context there corresponds an inherent probability of occurrence of the symbol. For example, a context corresponds to a type of coded symbol (the representation of a coefficient of a residual, the signaling of a coding mode, etc.) according to a given configuration, or to a state of the neighborhood (for example, the number of "intra" modes selected in the neighborhood, etc.);
        binarization: the symbols to be coded are cast into the form of a string of bits. These various bits are then dispatched successively to the binary entropy coder.
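The binarization concept above can be illustrated with a simple unary scheme. This is only a sketch: CABAC in fact uses several binarization schemes (truncated unary, Exp-Golomb, etc.), and the plain unary code shown here is chosen purely for illustration.

```python
def unary_binarize(value):
    """Unary binarization: a non-negative integer n is cast into a string
    of n ones followed by a terminating zero. Each resulting bit is then
    dispatched to the binary entropy coder."""
    return [1] * value + [0]

unary_binarize(3)  # [1, 1, 1, 0]
unary_binarize(0)  # [0]
```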
Thus, this entropy coder implements, for each context used, a system for learning probabilities on the fly with respect to the previously coded symbols for the context considered. This learning is based on the order of coding of these symbols. Typically, the image is traversed according to an order of “raster-scan” type, described hereinabove.
During the coding of a given symbol b that may equal 0 or 1, the learned probability P_i of occurrence of this symbol is updated for a current block MBi in the following manner:

    P_i(b = 0) = α · P_{i-1}(b = 0) + (1 − α)   if the coded bit is 0
    P_i(b = 0) = α · P_{i-1}(b = 0)             otherwise

where α is a predetermined value, for example 0.95, and P_{i-1} is the probability of occurrence of the symbol calculated at the last occurrence of this symbol.
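This update rule can be sketched directly in code; the function name is illustrative, and α = 0.95 is the example value given in the text.

```python
ALPHA = 0.95  # example value of the predetermined constant from the text

def update_probability(p_prev_zero, coded_bit):
    """Update the learned probability P(b=0) after coding one bit:
    P_i = ALPHA * P_{i-1} + (1 - ALPHA) if the coded bit is 0,
    P_i = ALPHA * P_{i-1}              otherwise."""
    return ALPHA * p_prev_zero + ((1 - ALPHA) if coded_bit == 0 else 0.0)

p = 0.5
p = update_probability(p, 0)  # coding a 0 pulls P(b=0) up: 0.525
p = update_probability(p, 1)  # coding a 1 pulls P(b=0) down: 0.49875
```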
A schematic example of such an entropy coding is represented in FIG. 1A, in which a current block MBi of the image IN is subjected to an entropy coding. When the entropy coding of the block MBi begins, the symbol occurrence probabilities used are those obtained after coding of a previously coded and decoded block, which is that which immediately precedes the current block MBi in accordance with the aforementioned row-by-row traversal of the blocks of “raster scan” type. Such a learning based on block-by-block dependency is represented in FIG. 1A for certain blocks only for the sake of clarity of the figure, by the thin-line arrows.
The drawback of this type of entropy coding resides in the fact that, during the coding of a symbol situated at the start of a row, the probabilities used correspond mainly to those observed for the symbols situated at the end of the previous row, given the "raster scan" traversal of the blocks. Now, on account of the possible spatial variation of the probabilities of the symbols (for example, for a symbol related to an item of motion information, the motion in the right part of an image may differ from that observed in the left part, and likewise the local probabilities stemming therefrom), the probabilities may lack local appropriateness, with the risk of causing a loss of coding effectiveness.
The document “Annex A: CDCM Video Codec Decoder Specification”, available at the Internet address wftp3.itu.int/av-arch/jctvc-site/2010_04_A_DresdenaCTVC-A114-AnnexA.doc (on 8 Feb. 2011), describes a coding method which alleviates the drawback mentioned above. The coding method described in that document comprises, as illustrated in FIG. 1B:
        a step of cutting an image IN into a plurality of blocks,
        a step of predictive coding of a current block MBi of this image with respect to a predetermined number of three blocks MBr1, MBr2 and MBr3 previously coded and decoded, as indicated by the hatched arrows. These three blocks specifically comprise the block MBr1 situated immediately to the left of the current block MBi, and the two blocks MBr2 and MBr3 situated respectively immediately above and immediately above and to the right of the current block MBi,
        a step of entropy coding of the blocks of the image IN, according to which each block uses the probabilities of symbol occurrence calculated respectively for the coded and decoded block situated immediately above the current block and for the coded and decoded block situated immediately to its left, when these blocks are available. This use of the probabilities of symbol occurrence is represented only partially in FIG. 1B, for the sake of clarity, by the thin-line arrows.
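The entropy-coding step above initializes each block's probability state from the above and left neighbors. The text does not specify how the two neighbors' probabilities are combined, so the simple average below is purely an assumption for illustration; the function name is hypothetical.

```python
def init_block_probabilities(above, left):
    """Hypothetical initialization of a block's context probabilities from
    the above and left neighbor blocks. The combination rule (a plain
    average) is an assumption; the cited specification's exact rule is not
    given in the text. Each argument is a list of per-context P(b=0)
    values, or None when that neighbor is unavailable."""
    if above is not None and left is not None:
        return [(a, l) and (a + l) / 2 for a, l in zip(above, left)]
    return above if above is not None else left

init_block_probabilities([0.4, 0.6], [0.6, 0.8])  # averages per context
init_block_probabilities(None, [0.6, 0.8])        # falls back to left only
```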
The advantage of such an entropy coding is that it exploits the probabilities arising from the immediate environment of the current block, thereby making it possible to achieve higher coding performance. Furthermore, the coding technique used makes it possible to code in parallel a predetermined number of pairwise neighboring subsets of blocks. In the example represented in FIG. 1B, three subsets SE1, SE2 and SE3 are coded in parallel, each subset consisting in this example of a row of blocks, represented dashed. Of course, such a coding requires that the blocks situated respectively above and above to the right of the current block be available.
A drawback of this parallel coding technique is that, to allow access to a probability of symbol occurrence calculated for the block situated immediately above the current block, it is necessary to store the probabilities associated with an entire row of blocks. If the second row of blocks SE2 in FIG. 1B is considered, for example, the first block of this row is subjected to entropy coding using the probabilities of symbol occurrence calculated for the first block of the previous row SE1. On completion of the coding of the first block of the second row, the probability-of-occurrence state V1 is stored in a buffer memory MT. The second block of the second row SE2 is thereafter subjected to entropy coding using the probabilities of symbol occurrence calculated both for the second block of the first row SE1 and for the first block of the second row SE2. On completion of the coding of the second block of the second row, the probability-of-occurrence state V2 is stored in the buffer memory MT. This procedure continues until the last block of the second row SE2. Since the quantity of probabilities is very large (there are as many probabilities as there are combinations of syntax elements and associated contexts), storing these probabilities over an entire row is expensive in terms of memory resources.
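The row buffer MT described above can be sketched as follows. This is a structural illustration only: `code_block` stands in for the actual entropy coding of one block, and all names are assumptions. The point of the sketch is the memory cost: one probability state per block column must be retained for the whole previous row.

```python
def code_rows_with_buffer(num_cols, num_rows, code_block):
    """Codes the blocks row by row while keeping, for each column, the
    probability state of the block immediately above (the buffer MT).
    code_block(above_state, left_state) -> new probability state;
    a state is None when the corresponding neighbor is unavailable."""
    buffer = [None] * num_cols  # one stored state per column: the memory cost
    for row in range(num_rows):
        left = None
        for col in range(num_cols):
            state = code_block(buffer[col], left)
            buffer[col] = state  # this row becomes "above" for the next row
            left = state
    return buffer

# Toy stand-in for per-block coding, chosen only to make the data flow visible.
toy = lambda above, left: (above or 0) + (left or 0) + 1
code_rows_with_buffer(2, 2, toy)  # final buffer after two 2-block rows
```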