In recent years, devices that handle image information as digital information, and that comply with standards such as MPEG (Moving Picture Experts Group), which achieve high-efficiency transmission and accumulation of information by performing compression through an orthogonal transform such as the discrete cosine transform and through motion compensation, utilizing redundancy specific to image information, have been spreading both for distributing information from broadcasting stations and for receiving information in homes.
Particularly, MPEG2 (ISO (International Organization for Standardization)/IEC (International Electrotechnical Commission) 13818-2) is defined as a general-purpose encoding method, and is currently used in a wide range of applications for both professional use and consumer use, as a standard that covers both interlaced and progressive images, and both standard-resolution and high-definition images. According to the MPEG2 compression standard, by assigning a bit rate of 4 to 8 Mbps to an interlaced image having a standard resolution of 720×480 pixels, for example, and assigning a bit rate of 18 to 22 Mbps to a high-resolution interlaced image having 1920×1088 pixels, a high compression rate and excellent image quality can be realized.
Although MPEG2 mainly targets high-image-quality encoding suitable for broadcasting, it does not support bit rates lower than that of MPEG1, that is, an encoding standard with a higher compression rate. With the spread of mobile terminals, the need for such an encoding standard was expected to grow in the near future, and the MPEG4 encoding standard was standardized to meet that need. In December 1998, its image encoding standard was approved as the international standard ISO/IEC 14496-2.
Also, a standard called H.26L (ITU-T (International Telecommunication Union Telecommunication Standardization Sector) Q6/16 VCEG (Video Coding Expert Group)) is being developed for image encoding originally intended for video conferencing. Compared with conventional encoding techniques such as MPEG2 and MPEG4, H.26L requires a larger amount of calculation for encoding and decoding, but is known to achieve higher encoding efficiency. Further, as part of the MPEG4 activities, a standard based on H.26L, which achieves even higher encoding efficiency by also adopting functions not supported by H.26L, is currently being developed as the Joint Model of Enhanced-Compression Video Coding.
This standard was established as an international standard in March 2003 under the names H.264 and MPEG-4 Part 10 (hereinafter referred to as AVC (Advanced Video Coding)).
However, there has been a concern that the macroblock size of 16 pixels×16 pixels provided by this standard is not optimal for a large image frame such as UHD (Ultra High Definition; 4000 pixels×2000 pixels), which is a target of next-generation encoding standards.
At present, to achieve higher encoding efficiency than that of AVC, an image encoding technique called HEVC (High Efficiency Video Coding) is being developed as a standard by JCTVC (Joint Collaboration Team-Video Coding), which is a joint standardization organization of ITU-T and ISO/IEC (see, for example, Non-Patent Document 1).
According to this HEVC encoding standard, a coding unit (CU) is defined as a processing unit playing the same role as the macroblock in AVC. Unlike the AVC macroblock, the size of a CU is not fixed to 16×16 pixels, but is specified in the compressed image information for each sequence.
Meanwhile, to improve the encoding of motion vectors using median prediction in AVC, it has been proposed to adaptively use, as predicted motion vector information, one of a "Temporal Predictor" and a "Spatio-Temporal Predictor" in addition to the "Spatial Predictor" defined in AVC and calculated by median prediction (see, for example, Non-Patent Document 2).
In an image information encoding device, cost function values are calculated for the respective blocks using the predicted motion vector information of each block, and the optimum predicted motion vector information is selected. In the compressed image information, a flag indicating which predicted motion vector information has been used is transmitted for each block.
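The selection described above can be sketched as follows. This is a simplified illustration, not the normative procedure of any standard: the function names are hypothetical, the temporal and spatio-temporal predictors are given as fixed example values, and the cost used here (the sum of absolute differences of the vector components) is only a stand-in for a true rate-distortion cost function.

```python
def spatial_median_predictor(mv_left, mv_top, mv_top_right):
    """Component-wise median of three neighboring motion vectors,
    in the manner of the AVC "Spatial Predictor"."""
    xs = sorted(v[0] for v in (mv_left, mv_top, mv_top_right))
    ys = sorted(v[1] for v in (mv_left, mv_top, mv_top_right))
    return (xs[1], ys[1])

def select_predictor(mv, candidates):
    """Pick the candidate predictor whose residual is cheapest to code.
    The index of the chosen candidate corresponds to the flag that is
    transmitted for the block."""
    def cost(pred):
        # Stand-in cost: magnitude of the motion vector difference.
        return abs(mv[0] - pred[0]) + abs(mv[1] - pred[1])
    index = min(range(len(candidates)), key=lambda i: cost(candidates[i]))
    return index, candidates[index]

# Example: one block's motion vector and three candidate predictors.
mv = (5, -2)
spatial = spatial_median_predictor((4, -2), (6, -1), (8, 0))  # -> (6, -1)
temporal = (5, -3)           # hypothetical co-located (temporal) vector
spatio_temporal = (5, -2)    # hypothetical spatio-temporal vector
flag, best = select_predictor(mv, [spatial, temporal, spatio_temporal])
# flag == 2: the spatio-temporal candidate predicts this block exactly.
```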
Further, as one of the motion information encoding methods, a method called Motion Partition Merging (hereinafter also referred to as the "merge mode") has been proposed (see, for example, Non-Patent Document 3). In this method, when the motion information of a current block is identical to the motion information of a surrounding block, only flag information is transmitted, and, at the time of decoding, the motion information of the current block is reconstructed using the motion information of that surrounding block.
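The idea behind the merge mode can be sketched as follows. This is a simplified illustration with hypothetical helper names; the actual standard defines a specific candidate list derivation and signaling syntax.

```python
def encode_merge(mv_current, merge_candidates):
    """If the current block's motion vector matches a candidate taken
    from a surrounding block, signal only (merge_flag, merge_index)
    instead of the motion information itself."""
    for index, mv in enumerate(merge_candidates):
        if mv == mv_current:
            return {"merge_flag": 1, "merge_index": index}
    return {"merge_flag": 0, "mv": mv_current}

def decode_merge(syntax, merge_candidates):
    """Reconstruct the current block's motion information from the
    transmitted flags and the surrounding blocks' motion information."""
    if syntax["merge_flag"]:
        return merge_candidates[syntax["merge_index"]]
    return syntax["mv"]

candidates = [(3, 1), (0, 0)]            # e.g. left and top neighbors
bits = encode_merge((0, 0), candidates)  # -> {'merge_flag': 1, 'merge_index': 1}
assert decode_merge(bits, candidates) == (0, 0)
```

The decoder builds the same candidate list from already-decoded neighbors, so only the index needs to be transmitted when a match exists.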
By the way, the image encoding standards such as AVC and HEVC described above provide a method of dividing a picture into a plurality of slices and performing processing per slice, to, for example, parallelize processing. Furthermore, entropy slices have been proposed in addition to these slices.
The entropy slice is a processing unit for the entropy encoding operation and the entropy decoding operation. That is, in the entropy encoding and entropy decoding operations, a picture is divided into a plurality of entropy slices and processed per entropy slice, whereas in the prediction operation each picture is processed without this slice division being applied.
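This asymmetry can be sketched as follows. The code is a toy illustration under stated assumptions, not any standard's normative process: the entropy coder is reduced to a stub that merely records residuals, and prediction is reduced to copying the previous block's value. The point shown is only that the entropy-coding state is reset at each entropy-slice start, while prediction still references data across those boundaries.

```python
def fresh_context():
    """Hypothetical entropy-coder state, reset per entropy slice so that
    each entropy slice can be coded independently (in parallel)."""
    return {"coded_symbols": 0}

def entropy_encode(value, context):
    """Stand-in for a real entropy coder: records use of the context."""
    context["coded_symbols"] += 1
    return value

def process_picture(blocks, entropy_slice_starts):
    """Encode a list of block values. The coding context is reset at
    every entropy-slice start, but the (toy) prediction from the
    previous block crosses those boundaries unchanged."""
    coded, context = [], None
    for i, block in enumerate(blocks):
        if context is None or i in entropy_slice_starts:
            context = fresh_context()  # entropy state: per entropy slice
        pred = blocks[i - 1] if i > 0 else 0  # prediction: ignores the division
        coded.append(entropy_encode(block - pred, context))
    return coded

# Block 2 starts a new entropy slice, yet is still predicted from block 1.
residuals = process_picture([10, 12, 11, 15], {2})  # -> [10, 2, -1, 4]
```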