The present section is intended to introduce the reader to various aspects of the art which may be related to various aspects of the present disclosure that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure.
Low-Dynamic-Range pictures (LDR pictures) are pictures whose luminance values are represented with a limited number of bits (most often 8 or 10). This limited representation does not allow correct rendering of small signal variations, in particular in dark and bright luminance ranges. In High-Dynamic-Range pictures (HDR pictures), the signal representation is extended in order to maintain a high accuracy of the signal over its entire range. In HDR pictures, pixel values are usually represented either in floating-point format (32 bits or 16 bits per component, namely float or half-float), the most popular format being the OpenEXR half-float format (16 bits per RGB component, i.e. 48 bits per pixel), or in integers with a long representation, typically at least 16 bits.
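As an illustration of why a 16-bit half-float representation preserves small signal variations better than an 8-bit integer one, the following sketch (not part of the cited disclosure; it relies only on Python's standard `struct` module) compares the round-trip quantization error of both representations for one sample value:

```python
import struct

def to_half(x: float) -> float:
    """Round a float to IEEE-754 half precision (16 bits) and back."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

def to_8bit(x: float) -> float:
    """Quantize a [0, 1] value to 8-bit integer levels and back."""
    return round(x * 255) / 255

value = 0.1
half_err = abs(to_half(value) - value)   # half-float round-trip error
byte_err = abs(to_8bit(value) - value)   # 8-bit round-trip error
print(half_err < byte_err)               # half-float is the finer quantizer
```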
A typical approach for encoding an HDR picture is to reduce the dynamic range of the picture in order to encode the picture by means of a traditional encoding scheme (initially configured to encode LDR pictures).
According to a first approach, a tone-mapping operator is applied to the input HDR picture and the tone-mapped picture is then encoded by means of a traditional 8-10 bit-depth encoding scheme such as JPEG/JPEG2000, or MPEG-2 or H.264/AVC for sequences of HDR pictures ("The H.264 Advanced Video Compression Standard", second edition, Iain E. Richardson, Wiley). Then, an inverse tone-mapping operator is applied to the decoded picture and a residual picture is calculated between the input picture and the decoded and inverse-tone-mapped picture. Finally, the residual picture is encoded by means of a second traditional 8-10 bit-depth encoding scheme.
This first approach is backward compatible in the sense that a LDR picture may be decoded and displayed by means of a traditional apparatus.
This first approach uses two encoding schemes and limits the dynamic range of the input picture to twice the dynamic range of a traditional encoding scheme (16-20 bits). Moreover, such an approach sometimes leads to a LDR picture with a weak correlation to the input HDR picture, which lowers the predictive-coding performance of the picture or sequence of pictures.
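The first approach described above can be sketched as follows. The global tone-mapping operator (a Reinhard-style L/(1+L) curve) and the 8-bit quantizer standing in for the traditional encoder are illustrative assumptions, not the actual codecs named in the text:

```python
def quantize(img, bits=8):
    """Stand-in for a traditional codec: quantize to 2^bits - 1 levels."""
    levels = (1 << bits) - 1
    return [round(v * levels) / levels for v in img]

def tone_map(img):
    """Hypothetical global tone-mapping operator mapping [0, inf) to [0, 1)."""
    return [v / (1.0 + v) for v in img]

def inverse_tone_map(img):
    """Inverse of the operator above (guarding against v == 1.0)."""
    return [v / (1.0 - v) if v < 1.0 else float('inf') for v in img]

hdr = [0.05, 1.0, 4.0, 12.0]            # linear HDR luminance samples
ldr = tone_map(hdr)                      # tone-mapped base layer
decoded_ldr = quantize(ldr)              # first traditional encoder/decoder
reconstructed = inverse_tone_map(decoded_ldr)
residual = [h - r for h, r in zip(hdr, reconstructed)]  # second-layer input
```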
According to a second approach, a backlight picture is determined from the luminance component of an input HDR picture. A residual picture is then obtained by dividing the input HDR picture by the backlight picture and both the backlight picture and the residual picture are directly encoded.
FIG. 1 shows an example of this second approach for encoding a HDR picture (more details for example in WO2013/102560).
In step 100, a module IC obtains the luminance component L and potentially at least one color component C(i) of a HDR picture I to be encoded. The HDR picture I may belong to a sequence of HDR pictures.
For example, when the HDR picture I belongs to the color space (X,Y,Z), the luminance component L is obtained by a transform f(.) of the component Y, e.g. L=f(Y).
When the HDR picture I belongs to the color space (R,G,B), the luminance component L is obtained, for instance in the 709 gamut, by a linear combination which is given by: L = 0.2127·R + 0.7152·G + 0.0722·B
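The linear combination above translates directly into code; this minimal sketch uses the coefficients as given in the text (which closely follow the Rec. 709 luma weights):

```python
def luminance_709(r: float, g: float, b: float) -> float:
    """Luminance from linear RGB using the 709-gamut weights of the text."""
    return 0.2127 * r + 0.7152 * g + 0.0722 * b

# White (1, 1, 1) maps to a luminance of ~1.0; green dominates the sum.
print(luminance_709(1.0, 1.0, 1.0))
```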
In step 101, a module BAM determines a backlight picture Bal from the luminance component L of the HDR picture I.
In step 102, the data needed to determine the backlight picture Bal, output from step 101, are encoded by means of an encoder ENC2 and added to a bitstream F2 which may be stored on a local or remote memory and/or transmitted through a communication interface (e.g. on a bus or over a communication network or a broadcast network).
In step 103, a LDR picture LDR2 is obtained from a ratio between the HDR picture and the backlight picture Bal.
More precisely, the luminance component L and potentially each color component C(i) of the picture I, obtained from the module IC, are divided by the backlight picture Bal. This division is done pixel by pixel.
For example, when the components R, G and B of the HDR picture I are expressed in the color space (R,G,B), the components RLDR2, GLDR2 and BLDR2 are obtained as follows: RLDR2 = R/Bal, GLDR2 = G/Bal, BLDR2 = B/Bal.
For example, when the components X, Y and Z of the HDR picture I are expressed in the color space (X,Y,Z), the components XLDR2, YLDR2 and ZLDR2 are obtained as follows: XLDR2 = X/Bal, YLDR2 = Y/Bal, ZLDR2 = Z/Bal.
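The pixel-by-pixel division of steps 103 and following can be sketched as below. The `eps` guard against division by zero in dark backlight regions is a practical assumption, not something specified in the text:

```python
def divide_by_backlight(component, backlight, eps=1e-6):
    """Pixel-by-pixel ratio of a picture component to the backlight picture.

    `eps` is a hypothetical floor that avoids division by zero where the
    backlight is (near) black.
    """
    return [[c / max(b, eps) for c, b in zip(row_c, row_b)]
            for row_c, row_b in zip(component, backlight)]

# Illustrative 2x2 component and backlight values:
R = [[0.2, 0.8], [1.6, 3.2]]
Bal = [[0.5, 0.5], [2.0, 2.0]]
R_LDR2 = divide_by_backlight(R, Bal)   # approximately [[0.4, 1.6], [0.8, 1.6]]
```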
In step 104, an operator TMO tone-maps the HDR picture I in order to get a LDR picture LDR1 having a lower dynamic range than the dynamic range of the HDR picture I.
Any specific tone-mapping operator may be used, such as, for example, the one defined by Reinhard (Reinhard, E., Stark, M., Shirley, P., and Ferwerda, J., "Photographic tone reproduction for digital pictures," ACM Transactions on Graphics 21 (July 2002)) or the one defined by Boitard et al. (Boitard, R., Bouatouch, K., Cozot, R., Thoreau, D., & Gruson, A. (2012). Temporal coherency for video tone mapping. In A. M. J. van Eijk, C. C. Davis, S. M. Hammel, & A. K. Majumdar (Eds.), Proc. SPIE 8499, Applications of Digital Picture Processing (pp. 84990D-84990D-10)).
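For concreteness, a minimal sketch of the Reinhard global operator cited above is given here; the optional `white` parameter follows the extended form of the operator, and its use as a keyword is an illustrative choice:

```python
def reinhard(l: float, white: float = float('inf')) -> float:
    """Reinhard global operator: L/(1+L), optionally with a white point.

    With a finite `white`, luminances at or above the white point are
    compressed toward (and reach) 1.0.
    """
    if white == float('inf'):
        return l / (1.0 + l)
    return l * (1.0 + l / (white * white)) / (1.0 + l)

print(reinhard(1.0))        # a mid luminance of 1.0 maps to 0.5
print(reinhard(2.0, 2.0))   # at the white point, output saturates at 1.0
```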
In step 105, the LDR pictures LDR1 and LDR2 are encoded by means of a predictive encoder ENC1 in at least one bitstream F1. More precisely, the LDR picture LDR1 (or LDR2) is used as a reference picture to predict the other LDR picture LDR2 (or LDR1). A residual picture is thus obtained by subtracting the prediction picture from the LDR picture and both the residual picture and the prediction picture are encoded.
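The inter-layer prediction of step 105 can be illustrated with a minimal sketch; the sample values and the `prediction_residual` helper are hypothetical, standing in for the block-based prediction an actual encoder such as ENC1 would perform:

```python
def prediction_residual(target, reference):
    """Residual picture: target minus its prediction (the reference picture)."""
    return [t - r for t, r in zip(target, reference)]

ldr1 = [0.10, 0.50, 0.90]   # tone-mapped picture (illustrative sample values)
ldr2 = [0.12, 0.48, 0.95]   # ratio picture, predicted from LDR1
res = prediction_residual(ldr2, ldr1)   # encoded alongside the reference
```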
The bitstream F1 may be stored on a local or remote memory and/or transmitted through a communication interface (e.g. on a bus or over a communication network or a broadcast network).
This second approach is backward compatible in the sense that a LDR picture LDR1 may be decoded and displayed by means of a traditional apparatus and the HDR picture I may also be decoded and displayed by decoding the LDR picture LDR2 and the data needed to determine a decoded version of the backlight picture Bal.
This second approach sometimes leads to a LDR picture LDR1 with a weak correlation to the other LDR picture LDR2, because the two pictures are not obtained from the HDR picture I by the same means: one is obtained by dividing the HDR picture by the backlight picture Bal and the other by applying a tone-mapping operator. This leads to a sparse residual content that sometimes has locally important values (lighting artefacts), thus lowering the coding performance of the picture or sequence of pictures.