The volume of digital video information is expanding with the practical deployment of high-definition (hi-vision) broadcasting and the like. This expansion is particularly significant for moving-picture digital information, and image compression techniques that allow efficient distribution through media such as broadcasting or Digital Versatile Discs (DVDs) have been standardized. For example, MPEG-2, which is adopted for satellite and terrestrial digital high-definition broadcasting, is one such image compression technique.
Standards for image compression techniques have been developing toward higher compression rates, accompanying improvements in the information processing capabilities of computers. H.264/AVC is the video compression standard succeeding MPEG-2, and one of its characteristic features is entropy coding (variable length coding). For entropy coding in accordance with H.264/AVC, the following two methods are provided: Context-Adaptive Binary Arithmetic Coding (CABAC) and Context-Adaptive Variable Length Coding (CAVLC).
Entropy coding by CABAC is characterized by including two stages of processing: binarization, which converts the multivalued information to be coded, known as syntax, into binary data; and arithmetic coding, which is performed based on the appearance probability of 0/1 per bit, with reference to a context calculated with respect to the binary data produced by the binarization. In arithmetic coding by CABAC, processing is performed on a per-bit basis because the appearance probability is updated per context at the same time, so that the operation speed is usually 1 bit per clock.
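The per-bit, context-adaptive nature described above can be illustrated with a simplified sketch. The following is not the normative H.264/AVC algorithm; the class names, the count-based probability estimator, and the floating-point interval are assumptions chosen only to show why each encoded bit must pass through an "estimate, split interval, update context" cycle, which is what limits throughput to one bit per step.

```python
# Simplified adaptive binary arithmetic coder (illustrative only, not the
# normative H.264/AVC CABAC). Each context tracks how many 0s and 1s it has
# seen and re-estimates the probability after every bit, so bits cannot be
# coded in parallel within one context.

class Context:
    def __init__(self):
        # Start with one pseudo-count per symbol (Laplace estimator).
        self.counts = [1, 1]

    def p_zero(self):
        return self.counts[0] / (self.counts[0] + self.counts[1])

    def update(self, bit):
        self.counts[bit] += 1


class BinaryArithmeticCoder:
    def __init__(self):
        self.low = 0.0
        self.high = 1.0

    def encode_bit(self, bit, ctx):
        # Split the current interval according to the context's estimate,
        # then keep the sub-interval matching the bit.
        split = self.low + (self.high - self.low) * ctx.p_zero()
        if bit == 0:
            self.high = split
        else:
            self.low = split
        ctx.update(bit)  # probability adapts per context, per bit

    def value(self):
        # Any number inside [low, high) identifies the bit sequence.
        return (self.low + self.high) / 2


ctx = Context()
coder = BinaryArithmeticCoder()
for b in [0, 0, 1, 0, 0, 0]:
    coder.encode_bit(b, ctx)
print(0.0 <= coder.value() < 1.0)  # True
```

Because `ctx.update` runs after every `encode_bit`, the probability used for bit *n+1* depends on bit *n*; this serial dependency is the reason the operation speed is usually 1 bit per clock.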
FIG. 1 is a block diagram showing a configuration of a conventional image coding apparatus equipped with a CABAC arithmetic coding function. A conventional image coding apparatus 10 shown in the figure is an apparatus which codes image data 12, and includes: a source coding unit 14, a control unit 18, an intermediate stream generating unit 30, a buffer 32, and a coded stream generating unit 34.
The source coding unit 14 performs predetermined processing on the image data 12 and outputs the resulting image-processed data. Here, the “image data” 12 is moving picture data including luminance and color difference per pixel in a group of pictures. In addition, specific examples of the processing performed by the source coding unit 14 are: motion prediction, intra prediction, Discrete Cosine Transform (DCT), and quantization.
The control unit 18 controls the source coding unit 14, the intermediate stream generating unit 30, and the coded stream generating unit 34.
The intermediate stream generating unit 30 binarizes source-coded information (hereinafter, referred to as “source-coded image data”) that is outputted by the source coding unit 14. The intermediate stream generating unit 30 includes: an intermediate code generating unit 40, a header information coding unit 42, and a synthesizing unit 44.
The intermediate code generating unit 40 generates information obtained by binarizing the source-coded image data per slice (hereinafter, referred to as “intermediate code”). The header information coding unit 42 generates coded header information. The “header information” includes: Sequence Parameter Set (SPS), Picture Parameter Set (PPS), and a slice header. In addition, the term simply referred to as “header information” hereinafter means the coded header information. The synthesizing unit 44 generates an intermediate stream 50 that is a synthesis of the intermediate code and the header information.
FIG. 2 is a diagram showing information included in the conventional intermediate stream 50. As the figure shows, the intermediate stream 50 includes header information, that is, SPS 22, PPS 24, and a slice header 26, as well as an intermediate code 52. The header information in the intermediate stream 50 is coded by the header information coding unit 42, and the intermediate code 52 is the binarized information generated by the intermediate code generating unit 40.
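The layout shown in FIG. 2 can be sketched as follows. The function name and the placeholder byte values are assumptions for illustration; only the ordering of the fields (header information ahead of the per-slice intermediate code) reflects the figure.

```python
# Illustrative sketch of the FIG. 2 layout: the synthesizing unit places the
# coded header information (SPS, PPS, slice header) before the binarized
# intermediate code. Field contents below are placeholder bytes.

def build_intermediate_stream(sps, pps, slice_header, intermediate_code):
    return {
        "SPS": sps,
        "PPS": pps,
        "slice_header": slice_header,
        "intermediate_code": intermediate_code,
    }

stream = build_intermediate_stream(
    b"\x67", b"\x68", b"\x41",      # placeholder header bytes
    b"\x01\x00\x01\x01",            # placeholder binarized data
)
print(list(stream.keys()))
# ['SPS', 'PPS', 'slice_header', 'intermediate_code']
```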
The buffer 32 is a storage unit which holds the intermediate stream 50 outputted from the synthesizing unit 44 of the intermediate stream generating unit 30.
Meanwhile, image compression processing is often performed in units known as macroblocks, obtained by dividing a picture constituting a moving picture into pixel groups made up of, for example, 16 vertical by 16 horizontal pixels. For example, when pipeline processing is performed, the pipelines usually operate on a per-macroblock basis.
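As a concrete illustration of this division, the following sketch counts the 16 × 16 macroblocks covering a picture. The 1920 × 1080 dimensions are an assumed example; dimensions that are not multiples of 16 are rounded up, as is done in practice (e.g., 1080 luma rows are coded as 68 macroblock rows, i.e., 1088 rows).

```python
# Illustrative: how many 16x16 macroblocks cover one picture.

def macroblocks_per_picture(width, height, mb_size=16):
    mbs_wide = (width + mb_size - 1) // mb_size   # ceiling division
    mbs_high = (height + mb_size - 1) // mb_size
    return mbs_wide * mbs_high

print(macroblocks_per_picture(1920, 1080))  # 120 * 68 = 8160
```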
The code amount in the compression of one macroblock depends on the status of pixels included in the macroblock. Normally, the code amount is small for a macroblock including uniform pixel values, whereas the code amount is large for a macroblock including pixels having large variations.
As described earlier, the operation speed of arithmetic coding is 1 bit per clock. When the code amount of a macroblock is significantly large, it may not be possible to process that macroblock within the average macroblock processing time (= 1 / the number of macroblocks coded per unit time). When such a situation continues, data for which arithmetic coding cannot yet be started accumulates as waiting data, and thus a buffer is usually provided for holding the data waiting for arithmetic coding. In the case of H.264/AVC, this buffer requires a capacity for holding data equivalent to a maximum of two pictures.
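The average macroblock processing time can be made concrete with a back-of-the-envelope calculation. The 250 MHz clock and 1080p30 figures below are assumed for illustration and do not come from the text; at 1 bit per clock, the average time budget per macroblock directly bounds the average number of CABAC bits per macroblock.

```python
# Illustrative budget calculation (assumed figures): average CABAC bit budget
# per macroblock for 1080p30 video on a hypothetical 250 MHz coder clock.

clock_hz = 250_000_000                           # assumed coder clock
mbs_per_picture = (1920 // 16) * (1088 // 16)    # 120 * 68 = 8160
pictures_per_second = 30

mbs_per_second = mbs_per_picture * pictures_per_second   # 244,800
avg_clocks_per_mb = clock_hz / mbs_per_second            # ~1021 clocks
avg_bits_per_mb = avg_clocks_per_mb                      # at 1 bit/clock

print(round(avg_bits_per_mb))  # 1021
```

A macroblock whose code amount exceeds this average budget cannot finish within the average macroblock processing time, which is why the waiting-data buffer described above is needed.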
The technique described in Patent Reference 1 discloses a method for performing, in the processing of coding the inputted image data 12, variable length coding asynchronously with the processing other than variable length coding, by temporarily storing in the buffer 32 an intermediate code uniquely corresponding to a variable length code and then performing variable length coding on the stored intermediate code.
The coded stream generating unit 34 obtains the intermediate stream 50 from the buffer 32, and generates and outputs a coded stream 20 by performing arithmetic coding on the obtained intermediate stream 50. Here, the information included in the coded stream 20 shall be described with reference to FIG. 3.
FIG. 3 is a diagram showing information included in the coded stream 20. As the figure shows, the coded stream 20 includes: header information, that is, SPS 22, PPS 24 and a slice header 26, and coded image data 28. Here, the “coded image data” 28 is information obtained by performing variable length coding on the intermediate code per slice.
The coded stream generating unit 34 includes: a stream input control unit 60, a header information decoding unit 62, a parameter information register 64, a variable length coding unit 66, and a synthesizing unit 68.
The stream input control unit 60 obtains the intermediate stream 50 from the buffer 32.
The header information decoding unit 62 decodes the intermediate stream 50 and extracts parameter information included in header information.
Here, the “parameter information” is the information, out of the information included in the header information, that is required for performing variable length coding on the intermediate code. In H.264/AVC, specific examples of the parameter information include “pic_width_in_MBs”, “pic_height_in_MBs”, “SliceQPy”, “cabac_init_idc”, “slice_type”, and so on.
The parameter information register 64 is a storage unit for temporarily holding the parameter information extracted by the header information decoding unit 62.
The variable length coding unit 66 obtains the intermediate code 52 and the parameter information held by the parameter information register 64, and performs variable length coding on the intermediate code 52.
The synthesizing unit 68 outputs the coded stream 20, which is a synthesis of the header information and the coded data on which variable length coding has been performed by the variable length coding unit 66.
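The flow through the coded stream generating unit 34 described above can be sketched end to end. All function names, the dictionary representation, and the toy "coding" step are assumptions for illustration; real CABAC would arithmetic-code the binarized data using the extracted parameters.

```python
# Illustrative sketch of the coded stream generating unit 34: decode header
# information to extract parameter information, latch it (parameter
# information register 64), variable-length code the intermediate code
# using it (unit 66), then synthesize header and coded data (unit 68).

def extract_parameters(header):
    # Stand-in for the header information decoding unit 62.
    return {"slice_type": header.get("slice_type"),
            "cabac_init_idc": header.get("cabac_init_idc")}

def variable_length_code(intermediate_code, params):
    # Stand-in for the variable length coding unit 66; the real unit would
    # perform arithmetic coding parameterized by `params`.
    return bytes(intermediate_code)

def generate_coded_stream(intermediate_stream):
    header = intermediate_stream["header"]
    params = extract_parameters(header)            # held in register 64
    coded = variable_length_code(
        intermediate_stream["intermediate_code"], params)
    # Synthesizing unit 68: header information followed by coded image data.
    return {"header": header, "coded_image_data": coded}

out = generate_coded_stream({
    "header": {"slice_type": "I", "cabac_init_idc": 0},
    "intermediate_code": [0, 1, 1, 0],
})
print(out["coded_image_data"])  # b'\x00\x01\x01\x00'
```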
Patent Reference 1: Japanese Unexamined Patent Application Publication No. 2003-259370.