The HEVC standard currently being developed, which is described in the document "B. Bross, W.-J. Han, J.-R. Ohm, G. J. Sullivan, and T. Wiegand, 'High efficiency video coding (HEVC) text specification draft 9,' document JCTVC-K1003 of JCT-VC, Shanghai, CN, Oct. 10-19, 2012," is similar to the preceding H.264 standard in that it uses block partitioning of the video sequence. The HEVC standard is distinguished from the H.264 standard, however, by the fact that the implemented partitioning complies with a tree structure called a "quadtree". To that end, as shown in FIG. 1, a current image IN is partitioned a first time into a plurality of square blocks CTB1, CTB2, . . . , CTBi, . . . , CTBL, for example of size 64×64 pixels (1≤i≤L). A given block CTBi is considered to be the root of a coding tree in which:
- a first leaf level beneath the root corresponds to a first partitioning depth level for the block CTBi, at which the block CTBi has been partitioned a first time into a plurality of coding blocks,
- a second leaf level beneath the first leaf level corresponds to a second partitioning depth level for the block CTBi, at which some blocks from said plurality of coding blocks for the block CTBi partitioned a first time are themselves partitioned into a plurality of coding blocks, and so on, until
- a k-th leaf level beneath the (k−1)-th leaf level corresponds to a k-th partitioning depth level for the block CTBi, at which some blocks from said plurality of coding blocks for the block CTBi partitioned k−1 times are partitioned a final time into a plurality of coding blocks.
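The recursive partitioning described above can be sketched as follows. This is a minimal illustration of a quadtree split, assuming a simple position-based split policy; the actual split decision in an HEVC coder is driven by rate-distortion criteria not shown here.

```python
# Minimal sketch of quadtree partitioning of a 64x64 CTB into coding
# blocks. The split policy is an illustrative assumption, not the HEVC
# rate-distortion decision.

def split_quadtree(x, y, size, depth, max_depth, should_split):
    """Recursively partition a square block; return the leaf coding blocks."""
    if depth == max_depth or not should_split(x, y, size, depth):
        return [(x, y, size)]          # leaf: one coding block
    half = size // 2
    leaves = []
    for dy in (0, half):               # the four quadrants of the quadtree
        for dx in (0, half):
            leaves += split_quadtree(x + dx, y + dy, half,
                                     depth + 1, max_depth, should_split)
    return leaves

# Example policy: split the top-left quadrant down to 16x16, keep the rest.
policy = lambda x, y, size, depth: (x, y) == (0, 0) and size > 16
blocks = split_quadtree(0, 0, 64, 0, max_depth=3, should_split=policy)
```

The leaves of the resulting tree are the coding blocks, each recorded here as an (x, y, size) triple; together they tile the whole 64×64 block.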
In an HEVC-compatible coder, the iteration of the partitioning of the block CTBi is performed up to a predetermined partitioning depth level.
At the conclusion of the aforementioned successive partitioning of the block CTBi, as shown in FIG. 1, the latter is finally partitioned into a plurality of coding blocks denoted CB1, CB2, . . . , CBj, . . . , CBM, where 1≤j≤M.
With reference to FIG. 1, a given block CBj is considered to be the root of a prediction tree and of a transformation tree for said block, the transformation being for example of discrete cosine transform (DCT) type. The prediction tree for a given block CBj is representative of the way in which the block CBj is partitioned into a plurality of blocks, which are called prediction blocks. For a considered prediction block, and as in the H.264 standard, the aforementioned HEVC standard implements prediction for the pixels of said block in relation to the pixels of at least one other block that belongs either to the same image (intra prediction) or to one or more preceding images in the sequence (inter prediction) that have already been decoded. Such preceding images are conventionally called reference images and are kept in memory both by the coder and by the decoder. Inter prediction is commonly called motion compensation prediction.
At the conclusion of the prediction of a considered block, a predicted block is delivered.
In accordance with the HEVC standard, during the transformation operation for a considered coding block CBj, the latter can be partitioned again into a plurality of smaller blocks TB1, TB2, . . . , TBv, . . . , TBQ (1≤v≤Q) that are called transform blocks or subblocks. Such partitioning complies with a tree structure of “quadtree” type, called “residual quadtree”, in which the leaves of the latter respectively represent the transform blocks TB1, TB2, . . . , TBv, . . . , TBQ that are obtained on various partitioning depth levels.
Each subblock TB1, TB2, . . . , TBv, . . . , TBQ contains the pixels of a residue that is representative of the difference between the pixels of the considered prediction block and the pixels of the current coding block CBj.
In the example shown in FIG. 1, the prediction residue of the coding block CBj is partitioned into ten square subblocks of variable size TB1, TB2, TB3, TB4, TB5, TB6, TB7, TB8, TB9, TB10, for example. Such partitioning is shown in dotted lines in FIG. 1.
The pixels of the prediction residue corresponding to each transform block TB1, TB2, . . . , TBv, . . . , TBQ can be signaled to the decoder as being all zero by a specific indicator called CBF (abbreviation for "Coded Block Flag").
If this indicator reads 0, the residue is interpreted as being zero in the considered transform block. In the example shown in FIG. 1, the transform blocks for which this indicator reads 0 are the subblocks TB1 and TB10.
If this indicator reads 1, the residue blocks obtained are then transformed, for example using a transform of DCT (discrete cosine transform) type. In the example shown in FIG. 1, the transform blocks for which this indicator reads 1 are the blocks TB2 to TB9.
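The CBF-based decision above can be sketched as follows. The transform is represented by a placeholder (an identity copy) rather than an actual DCT, and the block contents are illustrative; only the signaling logic is the point of the sketch.

```python
# Sketch of CBF ("coded block flag") signaling per transform block:
# an all-zero residue is signaled with CBF=0 and no payload; otherwise
# CBF=1 and the transformed residue is coded.

def transform(residue):
    # Placeholder for a DCT-type transform (identity copy for brevity).
    return [row[:] for row in residue]

def code_transform_block(residue):
    """Return (cbf, payload): cbf=0 means all-zero residue, nothing sent."""
    if all(v == 0 for row in residue for v in row):
        return 0, None                 # CBF=0: residue interpreted as zero
    return 1, transform(residue)       # CBF=1: transform the residue block

zero_block = [[0, 0], [0, 0]]          # like TB1/TB10 in FIG. 1
coded_block = [[3, 0], [-1, 2]]        # like TB2..TB9 in FIG. 1
cbf_zero, payload_zero = code_transform_block(zero_block)
cbf_coded, coeffs = code_transform_block(coded_block)
```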
The coefficients of each of the transformed residue blocks are then quantized, for example by using uniform scalar quantization, and then coded by means of entropy coding. Such steps are well known as such.
More particularly, the quantization step uses a quantization step size that is determined on the basis of a parameter called QP (abbreviation for "Quantization Parameter"). In the HEVC standard, it is possible to have a parameter QP that is the same for all the transform blocks of an image. To allow better adaptation to the local characteristics of the image, it is likewise possible to modify the parameter QP for each considered coding block. This modification is performed as follows. For a current block, the first transform block that actually contains a residue (CBF=1) is considered. With reference to FIG. 1, such a transform block is the block TB2. A piece of modification information for the parameter QP is then transmitted to the decoder. In the HEVC standard, this information consists of a syntax element denoted QPdelta that is representative of the difference between the parameter QP of the previously coded and then decoded block, called the "predicted QP", and the parameter QP of the current block.
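The QPdelta mechanism described above can be sketched as follows. This is a simplified illustration assuming the predicted QP is simply the QP of the previously coded-then-decoded block, as stated in the text; clipping and range constraints of the real syntax element are omitted.

```python
# Sketch of QP signaling via the QPdelta syntax element: only the
# difference from the predicted QP is transmitted, and the decoder
# reconstructs the current block's QP from it.

def encode_qp(current_qp, predicted_qp):
    """Coder side: QPdelta is the offset from the predicted QP."""
    return current_qp - predicted_qp

def decode_qp(qp_delta, predicted_qp):
    """Decoder side: reconstruct the QP of the current block."""
    return predicted_qp + qp_delta

predicted = 26                       # QP of the previously decoded block
qp_delta = encode_qp(30, predicted)  # the coder wants QP=30 for this block
recovered = decode_qp(qp_delta, predicted)
```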
It should be noted that the parameter QP of a current transform block is likewise used to determine the filtering strength of a deblocking filter. In a manner that is known per se, such filtering is applied to the edges of the transform blocks so as to decrease the block effects that appear in an image at the borders of the transform blocks and the coding blocks.
Particularly in the HEVC standard, various types of filtering can be considered, and the parameter QP is used to determine what type of filtering will be selected. The advantage of such an arrangement is, for example, the application of stronger filtering when the parameter QP is high (which corresponds to high compression for the block). It should be noted that even a block for which all the coefficients are zero (CBF=0), for example the block TB1 in FIG. 1, nevertheless has a QP parameter value that is predicted on the basis of the QP parameter value of the previously coded and then decoded block, such a parameter being able to be used in the case of deblocking filtering.
More particularly, the entropy coding can be implemented in an arithmetic coder called "CABAC" ("Context Adaptive Binary Arithmetic Coder"), which was introduced in the AVC compression standard (also known by the name ISO-MPEG4 part 10 and ITU-T H.264).
This entropy coder implements various concepts:
- arithmetic coding: the coder, as initially described in the document J. Rissanen and G. G. Langdon Jr., "Universal modeling and coding", IEEE Trans. Inform. Theory, vol. IT-27, pp. 12-23, January 1981, uses, for the purposes of coding a symbol, a probability of appearance of this symbol;
- adaptation to context: this involves adapting the probability of appearance of the symbols to be coded. Firstly, on-the-fly learning is performed. Secondly, according to the state of the previously coded information, a specific context is used for the coding. Each context has a corresponding probability of appearance that is peculiar to it. By way of example, a context corresponds to a type of coded symbol (the representation of a coefficient for a residue, coding mode signaling, . . . ) according to a given configuration, or to a state of the vicinity (for example the number of "intra" modes selected in the vicinity, . . . );
- binarization: a succession of bits is formed from the symbols to be coded. These various bits are subsequently sent, one after the other, to the binary entropy coder.
Thus, this entropy coder implements, for each context used, a system of on-the-fly learning of the probabilities in relation to the symbols previously coded for the considered context.
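The per-context on-the-fly learning described above can be sketched as follows. The update rule used here (simple Laplace-smoothed counting) is an illustrative stand-in for CABAC's actual finite-state probability update, and the context identifiers are assumptions; only the principle of one adaptive probability model per context is shown.

```python
# Sketch of per-context, on-the-fly probability learning in a
# CABAC-style entropy coder: each context keeps its own symbol
# probabilities and adapts them as symbols are coded.

class ContextModel:
    def __init__(self):
        self.counts = {0: 1, 1: 1}     # Laplace-smoothed bin counts

    def prob(self, symbol):
        return self.counts[symbol] / (self.counts[0] + self.counts[1])

    def update(self, symbol):
        self.counts[symbol] += 1       # adapt to the symbol just coded

contexts = {}                           # one model per context identifier

def code_bin(context_id, symbol):
    """Return the probability the arithmetic coder would use, then adapt."""
    ctx = contexts.setdefault(context_id, ContextModel())
    p = ctx.prob(symbol)
    ctx.update(symbol)                  # on-the-fly learning
    return p

# A run of 1s in a given context makes that context's model favour 1,
# without affecting any other context.
probs = [code_bin("cbf_luma", 1) for _ in range(4)]
```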
Moreover, a plurality of coding techniques are distinguished in the HEVC standard.
According to a first coding technique, the blocks of a current image are coded and then decoded sequentially according to a lexicographical order, for example following a line-by-line traversal of the blocks, of "raster scan" type, starting from the block situated at the top left of the image through to the block situated at the bottom right of the image. In the example shown in FIG. 1, the blocks CTB1 to CTBL are coded and then decoded successively.
If the current block is the first block to be coded in a considered set of consecutive blocks to be coded, for example a line of blocks, the method involves:
- determining, during the entropy coding of this first current block, the symbol appearance probabilities for said first current block, these probabilities being those that were determined for the last coded and then decoded block in the preceding line of blocks,
- determining the syntax element denoted QPdelta, which is representative of the difference between a predicted QP parameter value, namely that of the last coded and then decoded block in the preceding line of blocks, and a predetermined QP parameter value that the coder wishes to associate with the first current block.
Such a technique achieves high compression performance levels for the image. However, since the entropy coding and decoding of a symbol depend on the state of the probabilities learned up to that point, the symbols can be decoded only in the same order as that used for coding. Typically, the decoding can then only be sequential, thus preventing parallel decoding of a plurality of symbols (for example in order to benefit from multicore architectures).
According to a second coding technique called WPP (abbreviation for "Wavefront Parallel Processing"), the blocks of a current image are grouped into a predetermined number of sets of pairwise-neighboring blocks. In the example shown in FIG. 1, said sets of blocks are formed by each of the lines L1 to L6 of the image IN, for example. The sets of blocks formed in this manner are coded or decoded in parallel. Of course, such coding requires the blocks situated respectively above and above-right of the current block to be available, so as to be able to extract therefrom the data that are necessary for the prediction of said current block (values of decoded pixels allowing prediction of the pixels of the current block in intra mode, and values of the motion vectors in inter mode).
According to this second coding technique, if the current block is the first block to be coded in a considered set of consecutive blocks to be coded, for example a line of blocks, the method involves:
- determining, during the entropy coding of this first current block, the symbol appearance probabilities for said first current block, these probabilities being those that were determined at the conclusion of the coding and decoding of the second block in the preceding line of blocks,
- determining the syntax element denoted QPdelta, which is representative of the difference between a predicted QP parameter value, namely a predetermined value called QPslice, and a predetermined QP parameter value that the coder intends to associate with the first current block.
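The difference between the two techniques in initializing the first block of a line can be sketched as follows. The state dictionary and the string placeholders standing in for CABAC probability states are illustrative assumptions; only the two inheritance rules stated above are modeled.

```python
# Sketch contrasting how the first block of a line inherits its CABAC
# probabilities and predicted QP in sequential mode vs WPP mode.

def init_first_block_of_line(mode, line, state, qp_slice):
    """Return (cabac_probabilities, predicted_qp) for the line's first block."""
    prev = state[line - 1]
    if mode == "sequential":
        # Inherit from the last coded-then-decoded block of the previous line.
        return prev["after_last_block"], prev["last_qp"]
    if mode == "wpp":
        # Inherit the probabilities saved after the second block of the
        # previous line; the predicted QP is the predetermined QPslice.
        return prev["after_second_block"], qp_slice
    raise ValueError(mode)

# Placeholder state recorded while coding line 0.
state = {0: {"after_last_block": "P_end_of_line0",
             "after_second_block": "P_after_block2",
             "last_qp": 30}}
seq_init = init_first_block_of_line("sequential", 1, state, qp_slice=26)
wpp_init = init_first_block_of_line("wpp", 1, state, qp_slice=26)
```

In WPP mode the inherited state depends only on the first two blocks of the previous line, which is what allows a line to start before the previous line has finished.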
In this way, it is possible to start the coding of a current line of blocks without waiting for the first block in the preceding line to be coded and then decoded. Such an arrangement has the advantage of speeding up the processing time of the coder/decoder and of benefiting from a multi-platform architecture for the coding/decoding of an image. However, the compression performance levels obtained according to this second technique are not optimal, given that the learning of the probabilities of the CABAC entropy coder is slowed by the initialization of the probabilities at the beginning of each line.
Moreover, the documents available at the address http://phenix.int-evry.fr/jct/doc_end_user/documents/10_Stockholm/wg11/JCTVC-J0032-v3.zip propose either converting an image in which the blocks have been coded in WPP mode into an image in which the blocks have been coded in sequential mode or, conversely, converting an image in which the blocks have been coded in sequential mode into an image in which the blocks have been coded in WPP mode.
Conversion from WPP mode into sequential mode can allow an improvement in the compression performance levels at the expense of a loss of capacity for coding/decoding the lines in parallel. The conversion from sequential mode to WPP mode can allow “parallelization” of a stream of coded blocks that would have been received but that would not have been encoded in WPP mode, all this at the expense of a slight loss of compression efficiency.
In the case of conversion from WPP mode to sequential mode, for example, the method first of all involves entropy decoding of the blocks of the image that have been coded in WPP mode. The method then involves entropy reencoding of said entropically decoded blocks in accordance with sequential mode.
A disadvantage of conversion of the aforementioned type is that it works only when the parameter QP is constant in the image, that is to say that the value of the parameter QP is identical for each block of the image.
The reason is that when the parameter QP varies from one block to another in a considered stream of coded blocks, it is necessary, in addition to the aforementioned steps of entropy decoding and entropy reencoding, to decode the syntax elements QPdelta that are representative of the variations of the parameter QP, and then to modify the value of these syntax elements so that the parameter QP of each block is identical in WPP mode and in sequential mode. Such an arrangement proves necessary particularly because the first block in a line inherits a predicted parameter QP that differs according to whether the coder is working in sequential mode or in WPP mode, as described above.
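The QPdelta rewriting described above can be sketched as follows. The numerical values are illustrative assumptions; the sketch only shows why the syntax element must change when the predicted QP changes between the two modes while the block's actual QP must stay the same.

```python
# Sketch of rewriting the first-block QPdelta when converting a stream
# from WPP mode to sequential mode: the predicted QP differs between the
# modes (QPslice vs the last QP of the previous line), so QPdelta must
# be recomputed to keep the block's actual QP identical.

def rewrite_qp_delta(qp_delta_wpp, qp_slice, prev_line_last_qp):
    """Return the QPdelta to reencode in sequential mode."""
    actual_qp = qp_slice + qp_delta_wpp        # QP predicted from QPslice in WPP
    # In sequential mode the prediction is the last block of the previous line.
    return actual_qp - prev_line_last_qp

# In WPP mode the block was coded with QPdelta=+4 against QPslice=26,
# so its real QP is 30; the previous line ended with QP=28.
delta_seq = rewrite_qp_delta(4, qp_slice=26, prev_line_last_qp=28)
```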
Moreover, supposing that the first block in a line does not have a residue (CBF=0), the decoding of the syntax element QPdelta will not be able to be performed, because this syntax element will not have been transmitted. The case may therefore arise that the parameter QP of the first block in a line is different in WPP mode and in non-WPP mode, which gives rise to deblocking filtering along the border of this first block that differs according to whether the coder is working in sequential mode or in WPP mode. Said first block will therefore be decoded differently in sequential mode and in WPP mode, which will make the image impossible to decode, since the first decoded block in a line will not be the one expected. Since the values of the pixels of said first coded block are likely to be reused for the intra prediction of the following blocks, but differ from the expected values, the whole process of decoding the image will be erroneous.