FIG. 7 is a block diagram showing the schematic structure of a conventional audio decoding apparatus. This audio decoding apparatus has a decoding section 1, a data buffer 2, and an output section 3. The decoding section 1 receives and decodes a coded digital audio data stream, such as Dolby AC-3, read from a recording medium for digital audio data, such as a DVD (Digital Video Disc), and outputs PCM audio data. The PCM audio data output from the decoding section 1 are temporarily stored in the data buffer 2 in order to cope with synchronization with image information, fluctuations in the input bit rate of the digital audio data stream, and the like. The output section 3 receives the PCM audio data from the data buffer 2 and outputs audio serial data to a D/A (digital/analog) converter or the like, or outputs digital audio data to a digital audio interface receiver. If the digital audio data stream is multi-channel, the output section 3 outputs the time-series data (PCM audio data) output from the decoding section 1 to a plurality of digital/analog converters corresponding to the respective channels, or to a plurality of digital audio interface receivers.
FIG. 8 shows the structure of the PCM audio data output from the decoding section 1, namely, the data structure in the case of Dolby AC-3 6-channel output. As shown in FIG. 8, one sample data is composed of the PCM audio data of the respective channels to be output at the same time. Since Dolby AC-3 here adopts 6-channel output, one sample data is composed of six PCM audio data. A plurality of sample data compose an audio frame. The number of sample data per audio frame (the audio frame length) is determined by the audio decoding method; for example, in the case of Dolby AC-3, one audio frame is composed of 1536 sample data.
Incidentally, if the PCM audio data, which are time-series data, are given directly to the output section 3 after being decoded in the decoding section 1, the following problem arises. Namely, if the attribute of the PCM audio data given to the output section 3 changes dynamically, the data output from the output section 3 cannot cope with the dynamic change of the attribute. Moreover, after transmission of the digital audio data stream is started, if, for example, an error occurs and a re-synchronizing process is to be executed, it is necessary to initialize all of the decoding section 1, the data buffer 2, and the output section 3 and to return them to the initial state in order to restart the transmission.
The inventors of this invention have disclosed an audio decoding apparatus that addresses this problem in Japanese Patent Application Laid-Open No. 2000-278136. In this audio decoding apparatus, as shown in FIG. 9, tag data representing individual attributes are added to the respective PCM audio data. As a result, the output section can cope with a dynamic change of attributes, and the re-synchronizing process can be executed accurately.
However, in the case of the audio decoding apparatus disclosed in Japanese Patent Application Laid-Open No. 2000-278136, the memory requirement and the bus transmission requirement increase because of the tag data added to each of the PCM audio data. For example, if the PCM audio data are 24 bits and the tag data are 8 bits, one audio frame contains 1536 × 6 = 9216 PCM words, so the total PCM audio data becomes 27 Kbytes and the total tag data becomes 9 Kbytes per audio frame (1 Kbyte = 1024 bytes). Thus, in this example, the total memory requirement and bus transmission requirement becomes 36 Kbytes.