1. Field of the Invention
This invention relates to a decoder for decoding an error correcting code, particularly, to a decoder for decoding a turbo code.
2. Description of Related Art
In digital communication systems, an error correcting code for correcting errors occurring in a transmission line is used. Particularly in mobile communication systems, where the radio field intensity varies drastically due to fading and errors are therefore likely to occur, high correction capability is required of error correcting codes. Turbo codes, one example of error correcting codes, are notable for an error correction capability close to the Shannon limit and are employed, for example, in W-CDMA (Wideband Code Division Multiple Access) and CDMA-2000, which are third-generation mobile communication systems. This is disclosed in Japanese Unexamined Patent Application Publication No. 2004-15285.
FIG. 12 is a block diagram showing the structure of a typical encoding device for generating turbo codes. The encoding device 101 may be placed on the transmitting side of a communication system in order to encode information bits (systematic bits: systematic portion) U, as pre-encoded data, into turbo codes, which are parallel concatenated convolutional codes (PCCCs), and to output the turbo codes to the outside, such as to a transmission line. The turbo codes are not limited to parallel concatenated convolutional codes and may be any codes that can be turbo-decoded, such as serial concatenated convolutional codes.
As shown in FIG. 12, the encoding device 101 includes a first encoder 102 and a second encoder 103, each serving as a systematic convolutional coder, and an interleaver 104 which interleaves (i.e., rearranges) data.
The first encoder 102 encodes the input systematic portion U to generate redundancy bits (hereinafter "parity bits") P and outputs the parity bits P to the outside. The interleaver 104 rearranges the bits of the input systematic portion U into a prescribed interleaved pattern to generate a systematic portion Ub and outputs the generated systematic portion Ub to the second encoder 103. The second encoder 103 encodes the systematic portion Ub to generate parity bits Pb and outputs the parity bits Pb to the outside.
In sum, the encoding device 101 generates the systematic portion U, the parity bits P, the systematic portion Ub, and the parity bits Pb. A pair of the systematic portion U and the parity bits P (U, P) is called a first elemental code E1, and a pair of the systematic portion Ub and the parity bits Pb (Ub, Pb) is called a second elemental code E2.
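The data flow through the encoding device 101 can be sketched as follows. This is only an illustrative model: the `parity` function below is a simplified stand-in (a running XOR) and not the actual 8-state recursive systematic convolutional coder defined by the 3GPP standard, and the interleaver pattern is an arbitrary example.

```python
# Sketch of the encoder data flow in FIG. 12. The parity function is a
# simplified stand-in, NOT the actual 3GPP constituent encoder.

def parity(bits):
    # Stand-in convolutional parity: running XOR accumulator over the input.
    acc, out = 0, []
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def turbo_encode(u, pattern):
    # pattern: interleaver permutation (indices into u).
    p = parity(u)                 # parity bits P of the first elemental code
    ub = [u[i] for i in pattern]  # interleaved systematic portion Ub
    pb = parity(ub)               # parity bits Pb of the second elemental code
    return u, p, pb               # Ub itself is not output to the next stage

u = [1, 0, 1, 1]
pattern = [2, 0, 3, 1]            # hypothetical example pattern
print(turbo_encode(u, pattern))   # → ([1, 0, 1, 1], [1, 1, 0, 1], [1, 0, 1, 1])
```

Note that the function returns three sequences (U, P, Pb) per information sequence, consistent with the coding rate of 1/3 mentioned below.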
Turbo coding has two features: (1) using a plurality of systematic encoders having a relatively simple and small structure, and (2) supplying the information bits to each encoder through the interleaver (rearranging element).
The feature (2) aims to generate different codeword sequences in the different encoders by inputting the information bits to the encoders in rearranged order. On the decoding side, the decoded results of the codewords thereby complement one another, improving the error correction capability.
The feature (1) aims to allow the information bits to be used for the mutual complementation of decoded results between codewords. For example, the 3GPP (3rd Generation Partnership Project), which is working on the standardization of third-generation mobile communication systems such as W-CDMA, mandates the use of two 8-state systematic convolutional coders as the feature (1).
As described above, the pair of outputs {U, P} of the encoding device in FIG. 12 is called the first elemental code, and the other pair of outputs {Ub, Pb} is called the second elemental code. The systematic portion Ub is not actually output, so three bits, U, P, and Pb, are output to the subsequent stage per information bit. Although termination bits are actually output at the same time as well, they are ignored for simplification of the description. On this account, the coding rate of the turbo codes defined by the 3GPP standard is ⅓.
Decoding such encoded turbo codes is called turbo decoding. In the turbo decoding process, decoding is performed repeatedly while a first decoding unit for decoding the first elemental code E1 and a second decoding unit for decoding the second elemental code E2 exchange extrinsic information. The number of decoding units is not limited to two, and two or more stages of decoders may be used in accordance with the number of elemental codes of the turbo codes.
FIG. 13 shows a typical decoding device for turbo decoding. Turbo decoding has one principal feature: iterating the processing while exchanging extrinsic information among a plurality of elemental codes.
As shown in FIG. 13, a typical decoding device 201 includes a first decoding unit 202, a second decoding unit 203, an interleaved memory 204, a de-interleaved memory 205, and a hard decision/CRC decision section 206. The first decoding unit and the second decoding unit each have a plurality of decoders (turbo decoders) A to D, which are used to perform parallel processing: the turbo codes are divided into a plurality of sub-blocks, and the decoders process the sub-blocks in parallel. In the following, the turbo decoding process in the decoding device 201 is described first, and the parallel processing is described later.
The turbo decoding process in the decoding device 201 having such a configuration includes the following steps.
    (A) Reading the extrinsic information of the second decoding unit 203 from the de-interleaved memory 205 and inputting the extrinsic information and the first elemental code to the first decoding unit 202. Then, outputting extrinsic information from the first decoding unit 202 and writing it to the interleaved memory 204.
    (B) Reading the extrinsic information of the first decoding unit 202 from the interleaved memory 204 and inputting the extrinsic information and the second elemental code to the second decoding unit 203. Then, outputting extrinsic information from the second decoding unit 203 and writing it to the de-interleaved memory 205.
    (C) In the final iteration of the decoding process, reading the log likelihood ratio LLR of the second decoding unit 203 from the de-interleaved memory 205, making the hard decision in the hard decision/CRC decision section 206, and finally performing error checking by CRC.
In the turbo decoding process, the step (A) is performed first. In this step, the extrinsic information from the second decoding unit 203 is an initial value (=0). Then, the step (B) is performed, and the step (A) is performed again. Subsequently, the steps (B) and (A) are iterated an arbitrary number of times. In the final iteration, the step (B) is performed; at this step, the second decoding unit 203 outputs the log likelihood ratio rather than the extrinsic information. After that, the step (C) is performed.
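The iteration schedule of steps (A) to (C) can be sketched as follows. The two elemental decoders are placeholders (dummy updates standing in for MAP or similar soft-output decoding); only the exchange of extrinsic information between the two memories and the final hard decision are modeled, and the sign convention for the hard decision (negative LLR meaning "1") is an assumption for illustration.

```python
# Sketch of the turbo decoding iteration schedule (steps (A)-(C)).
# decode1/decode2 are dummy placeholders for the elemental soft decoders.

def decode1(code1, extrinsic_in):
    # Step (A): first decoding unit; dummy update of extrinsic information.
    return [x + 1 for x in extrinsic_in]

def decode2(code2, extrinsic_in):
    # Step (B): second decoding unit; on the final iteration its output is
    # treated as the log likelihood ratio LLR rather than extrinsic info.
    return [x + 1 for x in extrinsic_in]

def turbo_decode(code1, code2, n, iterations):
    deinterleaved = [0] * n                      # initial extrinsic info = 0
    for _ in range(iterations):
        interleaved = decode1(code1, deinterleaved)      # step (A)
        deinterleaved = decode2(code2, interleaved)      # step (B)
    # Step (C): hard decision on the LLR (assumed convention: LLR < 0 -> "1").
    return [1 if llr < 0 else 0 for llr in deinterleaved]

print(turbo_decode([], [], 4, 3))   # → [0, 0, 0, 0] with the dummy decoders
```

A real implementation would also perform the CRC check of step (C) on the hard-decision output; that bookkeeping is omitted here.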
Because the turbo codes are systematic codes, the information bits U are contained in the received sequence. The extrinsic information is a value (a priori value) indicating the likelihood that an information bit U is "0" (equivalently, the likelihood that it is "1"), determined prior to the decoding. Turbo decoding is a process that exchanges (mutually complements) the probability that each information bit is "0" between the decoding of the first and second elemental codes, thereby improving the accuracy of the probability and enhancing the error correction capability.
In the above-described turbo decoding process, interleaving and de-interleaving are performed as follows. FIGS. 14 and 15 illustrate interleaving and de-interleaving. FIG. 14 shows the relationships among the first decoding unit 202 (precisely, each decoder of the first decoding unit), the second decoding unit 203 (each decoder of the second decoding unit), the interleaved memory 204, and the de-interleaved memory 205. FIG. 15 shows the access directions in the memory spaces of the interleaved memory 204 and the de-interleaved memory 205. The access direction differs between the first decoding unit 202 (each decoder of the first decoding unit) and the second decoding unit 203 (each decoder of the second decoding unit).
The first decoding unit 202 outputs the extrinsic information to the interleaved memory 204 by a sequential access. In this specification, a sequential access means an access along the row direction of the memory space, which is arranged as a matrix. That is, the extrinsic information is written in the interleaved memory 204 along the row direction (see FIG. 15).
The second decoding unit 203 performs an interleaved access to the interleaved memory 204. In this specification, an interleaved access means an access along the column direction of the memory space. That is, the extrinsic information written in the interleaved memory 204 is read along the column direction (see FIG. 15). FIG. 15 shows an interleaved access that reads data from the bottom of the memory space to the top.
Interleaving is thus performed by the above-described interleaved access, and the second decoding unit processes the interleaved extrinsic information.
The second decoding unit 203 outputs the extrinsic information to the de-interleaved memory 205 by the interleaved access. That is, the extrinsic information is written in the de-interleaved memory 205 along the column direction (see FIG. 15).
The first decoding unit 202 performs the sequential access to the de-interleaved memory 205. That is, the extrinsic information written in the de-interleaved memory 205 is read along the row direction (see FIG. 15). The first decoding unit 202 therefore reads out de-interleaved extrinsic information.
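The effect of combining a row-direction write with a column-direction read can be sketched as follows. The memory dimensions are arbitrary example values, and the column read here runs top to bottom for simplicity (FIG. 15 shows a bottom-to-top variant).

```python
# Sketch: writing extrinsic information along the row direction (sequential
# access) and reading it along the column direction (interleaved access)
# yields an interleaved sequence.

ROWS, COLS = 3, 4                         # example memory-space dimensions
data = list(range(ROWS * COLS))           # extrinsic values 0..11 in order

# Sequential (row-direction) write into the matrix-shaped memory space.
memory = [[0] * COLS for _ in range(ROWS)]
it = iter(data)
for r in range(ROWS):
    for c in range(COLS):
        memory[r][c] = next(it)

# Interleaved (column-direction) read.
interleaved = [memory[r][c] for c in range(COLS) for r in range(ROWS)]
print(interleaved)   # → [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
```

Writing back along the column direction and reading along the row direction, as done with the de-interleaved memory 205, inverts this permutation and restores the original order.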
In actual interleaving, data are also exchanged within each row or column, as indicated by the numbers in FIG. 16; a description of this more complex interleaving is omitted for simplification.
As described above, parallel processing is performed in decoding; it proceeds as follows. In turbo decoding, an input code block is divided into a plurality of sub-blocks, and a plurality of decoders (turbo decoders) process the sub-blocks in parallel. In detail, each decoder decodes its sub-block in units called windows. The decoding process in each decoder is described later.
When the sub-blocks are accessed in parallel, the decoders of the second decoding unit access a plurality of columns of the interleaved memory. FIG. 16 shows an example of such access. In the example of FIG. 16, the first and second columns of the memory space are accessed by a decoder A, the third and fourth columns by a decoder B, the fifth and sixth columns by a decoder C, and the seventh and eighth columns by a decoder D. In this example, the decoders access the fourth row at once at time T1, as shown in FIG. 17. Because the memory banks of the interleaved memory are arranged along the row direction, this operation of the decoders causes access collisions. Even if the data shown as 31-37 in FIG. 16 are empty and the decoders do not access the empty portion, a plurality of decoders simultaneously access the same memory bank (the third row in FIG. 19), as shown in FIGS. 19 to 21.
As described above, a turbo decoding device decodes a plurality of sub-blocks in parallel. When a plurality of decoders access the same memory bank during such parallel decoding, the processing speed decreases.
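The collision condition described above can be sketched as follows. The column assignment mirrors the example of FIG. 16 (decoders A to D, two columns each), and the model assumes, as in FIG. 17, that the decoders advance through the rows in lockstep, so the bank index equals the current row index.

```python
# Sketch of the access-collision condition of FIGS. 16-17: banks are arranged
# along the row direction, and each decoder of the second decoding unit walks
# down its assigned columns one row per cycle. When two or more decoders
# address the same row (bank) in the same cycle, the accesses collide.

ROWS = 8
decoders = {"A": [0, 1], "B": [2, 3], "C": [4, 5], "D": [6, 7]}  # columns

def bank_accesses(cycle):
    # Column-direction access in lockstep: every decoder is at the same row
    # index in a given cycle, so every decoder addresses the same bank.
    row = cycle % ROWS
    return {name: row for name in decoders}

accesses = bank_accesses(3)            # e.g. time T1, the fourth row
collision = len(set(accesses.values())) < len(accesses)
print(collision)   # → True: all four decoders hit the same bank at once
```

This is exactly the problem the section identifies: with row-direction banking, the column-direction (interleaved) accesses of the parallel decoders serialize on a single bank, slowing the decoding.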