H.264, a representative video coding standard, is cited as an example of a lossy compression system. In H.264, an orthogonal transform such as the discrete cosine transform (DCT) is applied to the prediction error signal between an input image signal and a prediction image signal generated by intra-prediction or motion compensation, and the resulting transform coefficients are further compressed by quantization and encoding to create an encoded image.
For example, International Publication No. 2007/114368 (page 19, FIG. 11B) discloses a technique that converts an input image of an N-bit depth into an image of an (N+M)-bit depth, larger than the N-bit depth by M bits, and further converts the converted image signal into an (N+M−L)-bit depth. The converted image signal is then stored in a frame memory. When L = M, for example, even though the bit depth has been enlarged by M bits, the image signal is stored in the frame memory as an image signal of the N-bit depth, which prevents an increase in the capacity of the frame memory. When the image signal is read out from the frame memory, it is converted from the (N+M−L)-bit depth back into an image signal of the (N+M)-bit depth.
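The bit-depth conversions described above can be sketched as simple shift operations. This is a minimal illustration of the idea, not the disclosed implementation; the function names and the choice of N = 8, M = L = 2 are hypothetical.

```python
def expand_bit_depth(sample: int, m: int) -> int:
    """Enlarge an N-bit sample to an (N+M)-bit sample by a left shift."""
    return sample << m

def reduce_for_storage(sample: int, l: int) -> int:
    """Reduce an (N+M)-bit sample to (N+M-L) bits before writing it to
    the frame memory; the low-order L bits are discarded."""
    return sample >> l

def restore_on_read(sample: int, l: int) -> int:
    """Convert the stored (N+M-L)-bit sample back to (N+M) bits when
    reading it out of the frame memory."""
    return sample << l

# Example with N = 8, M = L = 2: the stored sample is again 8 bits wide,
# so the frame-memory capacity does not increase.
sample = 200                            # 8-bit input sample
wide = expand_bit_depth(sample, 2)      # 10-bit internal precision: 800
stored = reduce_for_storage(wide, 2)    # back to 8 bits for storage: 200
restored = restore_on_read(stored, 2)   # 10-bit value on read-out: 800
```

Because the low-order bits of `wide` happen to be zero here, the round trip is lossless; the next paragraph notes why this is not true in general.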
With the above-mentioned technique, when the image signal is stored in the frame memory, the image of the (N+M)-bit depth is converted into the image of the (N+M−L)-bit depth by a bit-shift process. For this reason, in an image with a wide dynamic range, an error may arise in the bit-shift process, and coding efficiency may fall. Moreover, although the same conversion is applied to the luminance signal and the two color-difference signals, when the low-order bits are cut off, it is preferable to select a process suited to each of the luminance and color-difference components.
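The error introduced by the bit-shift process can be seen in a short sketch: whenever the low-order L bits of the (N+M)-bit sample are nonzero, a truncating shift down and back up cannot recover the original value. The function name and sample values below are hypothetical.

```python
def storage_roundtrip(sample: int, l: int) -> int:
    """Shift an (N+M)-bit sample down by l bits for storage, then back
    up on read-out; the low-order l bits are irrecoverably lost."""
    return (sample >> l) << l

# A 10-bit internal value whose low 2 bits are nonzero is not recovered:
value = 803
recovered = storage_roundtrip(value, 2)  # 800, a truncation error of 3
```

It is this truncation error, accumulated over samples with wide dynamic range, that degrades the stored reference pictures and can reduce coding efficiency.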