As methods for compressing and recording a moving image, MPEG-2 Video (hereafter referred to simply as MPEG-2) (ISO/IEC 13818-2:2000 Information technology—Generic coding of moving pictures and associated audio information: Video) and H.264 (ISO/IEC 14496-10:2004 Information technology—Coding of audio-visual objects—Part 10: Advanced Video Coding) are known. In recent years, the Joint Collaborative Team on Video Coding (JCT-VC) has been established as a joint organization of ITU-T and ISO/IEC. This organization advances standardization activities for High Efficiency Video Coding (HEVC), which is a new moving image coding standard. For example, the JCT-VC contribution JCTVC-A205.doc (http://wftp3.itu.int/av-arch/jctvc-site/201004_A_Dresden/) proposes an improved technique for HEVC based on H.264.
With a coding method based on orthogonal transform and quantization, as represented by MPEG-2, H.264, and HEVC, the coding side applies orthogonal transform and quantization to a predetermined block image to generate quantization coefficient data. In this case, the coding method controls the image quality by quantizing the image data using an image quality control parameter called a quantization parameter. Specifically, quantization using a small quantization parameter value improves the image quality and increases the amount of codes, whereas quantization using a large quantization parameter value degrades the image quality and reduces the amount of codes. In the coding processing, the coding method thus codes the image data while selecting an optimum quantization parameter value according to a target amount of codes. This processing is referred to as rate control, for which TM5 (MPEG-2 Test Model 5 (TM5), Doc. ISO/IEC JTC1/SC29/WG11/N0400, Test Model Editing Committee, April 1993) and various other methods have been proposed. Japanese Patent Application Laid-Open No. 2001-45494 discusses a technique for determining the visual importance of an image area and controlling an image quality control parameter according to the importance.
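The trade-off described above can be illustrated with a minimal sketch. This is not the quantization formula of any of the standards named here; the step sizes, coefficient values, and the use of a nonzero-level count as a proxy for the amount of codes are illustrative assumptions.

```python
# Illustrative sketch of uniform scalar quantization (not the exact
# formula of MPEG-2, H.264, or HEVC): a smaller quantization step keeps
# more coefficient detail (better quality, more codes), a larger step
# discards more (worse quality, fewer codes).

def quantize(coeffs, qstep):
    """Map transform coefficients to integer levels with step size qstep."""
    return [round(c / qstep) for c in coeffs]

def dequantize(levels, qstep):
    """Reconstruct approximate coefficients from the integer levels."""
    return [level * qstep for level in levels]

# Hypothetical transform coefficients of one block, largest first.
coeffs = [52.0, -13.0, 7.5, 3.2, -1.1, 0.6, 0.2, -0.1]

for qstep in (2, 16):  # small vs. large quantization step
    levels = quantize(coeffs, qstep)
    recon = dequantize(levels, qstep)
    nonzero = sum(1 for level in levels if level != 0)  # proxy for code amount
    error = sum(abs(c - r) for c, r in zip(coeffs, recon))
    print(f"qstep={qstep}: nonzero levels={nonzero}, reconstruction error={error:.1f}")
```

Running the sketch shows the small step producing more nonzero levels (more data to code) with a smaller reconstruction error, and the large step the opposite, which is the trade-off that rate control navigates.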
Variable-length coding is applied to the quantization coefficient data to generate variable-length coding coefficient data. The quantization parameter is also coded to generate a quantization parameter code. For example, one quantization parameter coding method uses, as a predictive quantization parameter, the quantization parameter used to quantize the block quantized before the block subjected to quantization, and calculates the difference value between the predictive quantization parameter and the quantization parameter of the block subjected to quantization. This difference value, called Quantization Parameter Delta (QP_DELTA), is embedded in the bit stream as the quantization parameter code. The variable-length coding coefficient data and the quantization parameter code generated in this way are transmitted as a bit stream to a decoding unit via an optical disk medium or a network. The decoding side decodes the variable-length coding coefficient data and the quantization parameter code to generate quantization coefficient data and a quantization parameter, and applies inverse quantization and inverse orthogonal transform to the quantization coefficient data using the quantization parameter to generate a decoded image.
With MPEG-2 and H.264, processing is performed in units of lattice blocks of 16×16 pixels, called macroblocks, formed by dividing an image. In pixel units, the size of a block subjected to orthogonal transform is 8×8 pixels with MPEG-2, and 8×8 or 4×4 pixels with H.264. In other words, one macroblock includes a plurality of orthogonal transform blocks. Since MPEG-2 and H.264 enable controlling the quantization parameter in macroblock units (rate control), orthogonal transform blocks included in the same macroblock are quantized based on the same quantization parameter.
With HEVC, on the other hand, the lattice blocks formed by dividing an image in lattice form are called Largest Coding Units (LCUs), each formed of 64×64 pixels. Each LCU is divided into a plurality of smaller blocks called Coding Units (CUs) by using a region quadtree structure. Each CU includes orthogonal transform blocks called Transform Units (TUs), and each TU is further divided into a plurality of smaller blocks, likewise using the region quadtree structure. Each block has a division flag. A block having a True division flag includes four divisional blocks, each half the size of the block both horizontally and vertically. A block having a False division flag includes no divisional blocks but holds the actual data of the block. There are various methods for determining whether a block is to be divided; for example, Japanese Patent Application Laid-Open No. 2005-191706 discusses a technique for calculating a cost of blocks by using the Lagrange multiplier and selecting the block division method with the lowest cost.
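The region quadtree with division flags can be sketched as follows. The splitting rule, the 8×8 minimum size, and the data representation are illustrative assumptions; only the flag semantics (True splits a block into four half-size blocks, False marks a leaf holding actual data) come from the description above.

```python
# Minimal sketch of a region quadtree over a 64x64 block: a block with a
# True division flag contains four divisional blocks, each horizontally
# and vertically half its size; a False flag marks a leaf block.

def build_quadtree(x, y, size, min_size, should_split):
    """Recursively divide a block; returns a nested dict representing the tree."""
    node = {"x": x, "y": y, "size": size}
    if size > min_size and should_split(x, y, size):
        node["split"] = True            # division flag True: four children
        half = size // 2
        node["children"] = [
            build_quadtree(x,        y,        half, min_size, should_split),
            build_quadtree(x + half, y,        half, min_size, should_split),
            build_quadtree(x,        y + half, half, min_size, should_split),
            build_quadtree(x + half, y + half, half, min_size, should_split),
        ]
    else:
        node["split"] = False           # division flag False: leaf with data
    return node

def leaf_sizes(node):
    """Collect the sizes of all undivided (leaf) blocks in the tree."""
    if node["split"]:
        return [s for child in node["children"] for s in leaf_sizes(child)]
    return [node["size"]]

# Toy splitting rule: keep dividing the top-left quadrant down to 8x8.
tree = build_quadtree(0, 0, 64, 8, lambda x, y, s: x == 0 and y == 0)
print(sorted(leaf_sizes(tree)))  # → [8, 8, 8, 8, 16, 16, 16, 32, 32, 32]
```

In practice the splitting decision would be driven by a cost criterion such as the Lagrange-multiplier cost mentioned above rather than by block position; the toy rule here merely makes the recursion visible.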
With a quantization parameter coding method that codes, as a quantization parameter code, the difference value between the quantization parameter of a block subjected to coding and a predictive quantization parameter, the amount of quantization parameter codes increases as the absolute value of the difference value increases. When a quantization parameter is conventionally embedded in block units, since there is only one method for calculating the predictive quantization parameter, the absolute value of the difference value may become large, resulting in an unnecessarily increased amount of quantization parameter codes depending on the image coding method and image characteristics.