1. Field of the Invention
The present invention relates to a decoding device, a decoding method, and a receiving apparatus which decode received data based on likelihood information and, particularly, to a decoding device, a decoding method, and a receiving apparatus which perform a decoding process with improved efficiency.
2. Description of Related Art
In a digital communication system, error correcting codes for correcting errors which occur on a transmission line are used. Particularly in a mobile communication system, where the radio field intensity varies drastically due to fading and errors are likely to occur, a high correction capability is required of error correcting codes. Turbo codes, which are one example of error correcting codes, are notable as codes having an error correction capability close to the Shannon limit, and are employed, for example, in W-CDMA (Wideband Code Division Multiple Access) and CDMA-2000 as third-generation mobile communication systems.
FIG. 12 is a block diagram showing the structure of a typical encoding device for generating turbo codes. The encoding device 101 may be placed on the transmitting side of a communication system in order to encode information bits (systematic bits: systematic portion) U as pre-encoded data into turbo codes as parallel concatenated convolutional codes (PCCCs) and output the turbo codes to the outside, such as to a transmission line. The turbo codes are not limited to parallel concatenated convolutional codes and may be any codes which can be turbo-decoded, such as serial concatenated convolutional codes.
As shown in FIG. 12, the encoding device 101 includes a first encoder 102 and a second encoder 103, which serve as systematic convolutional coders, and an interleaver 104 which interleaves (i.e. rearranges) data.
The first encoder 102 encodes input systematic portion U to generate redundancy bits (hereinafter as “parity bits”) 1P and outputs the parity bits 1P to outside. The interleaver 104 rearranges each bit of the input systematic portion U into a prescribed interleaved pattern to generate a systematic portion Uint and outputs the generated systematic portion Uint to the second encoder 103. The second encoder 103 encodes the systematic portion Uint to generate parity bits 2P and outputs the parity bits 2P to outside.
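The rearrangement performed by the interleaver 104 can be sketched as a fixed permutation of bit positions, as in the following minimal example; the pattern used here is an arbitrary illustrative one, not an actual pattern prescribed by any standard.

```python
# Sketch of block interleaving as a fixed permutation of bit positions.
# The pattern below is illustrative only; a real turbo interleaver uses
# a prescribed pattern defined by the applicable standard.

def interleave(bits, pattern):
    """Rearrange bits so that output position j carries input bit pattern[j]."""
    return [bits[p] for p in pattern]

def deinterleave(bits, pattern):
    """Invert the rearrangement performed by interleave()."""
    out = [0] * len(bits)
    for j, p in enumerate(pattern):
        out[p] = bits[j]
    return out

U = [1, 0, 0, 1, 1, 0]          # systematic portion U
pattern = [2, 5, 0, 3, 1, 4]    # illustrative interleaving pattern
U_int = interleave(U, pattern)  # systematic portion Uint fed to the second encoder

assert deinterleave(U_int, pattern) == U  # de-interleaving restores the order
```

Because Uint is fully determined by U and the pattern, only U, 1P, and 2P need to be transmitted, as noted later in connection with the decoding device.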
In sum, the encoding device 101 generates the systematic portion U, the parity bits 1P, the systematic portion Uint, and the parity bits 2P. The pair of the systematic portion U and the parity bits 1P (U, 1P) is called a first elemental code E, and the pair of the systematic portion Uint and the parity bits 2P (Uint, 2P) is called a second elemental code Eint.
Decoding such encoded turbo codes is called turbo decoding. In the turbo decoding process, decoding is performed repeatedly as a first decoder for decoding the first elemental code E and a second decoder for decoding the second elemental code Eint exchange external information. The number of decoders is not limited to two, and two or more stages of decoders may be used in accordance with the number of elemental codes of the turbo codes.
Specifically, in the turbo decoding process, the parity bits 1P, channel values Y1p and Ys of the systematic portion U, and a predetermined value (external information) Le2 are input to a first decoder as the first stage decoder to obtain external information Le1. An initial value of the external information Le2 is 0. In order to align the data sequence, the external information Le1 is rearranged by an interleaver into external information Linte1 to be input to a second decoder as the second stage decoder.
Further, the channel value Ys is rearranged by an interleaver into Yints. The channel value Y2P of the parity bits 2P and the interleaved Yints are input to the second decoder to obtain external information Linte2. These values are rearranged by a de-interleaver so that the data sequence is the same as the data from the first decoder to thereby obtain external information Le2 for the first decoder. This is repeated a plurality of times.
In this way, the process decodes information as the first and second decoders repeatedly exchange reliability information with each other, i.e. the reliability information (external information Le1) of the first decoder is used to enhance the reliability information of the second decoder, and the reliability information (external information Le2) of the second decoder is used to enhance that of the first decoder. Such iterative operation is called "turbo decoding" because it resembles the turbo engine of an automobile.
In such iterative decoding, determining when to stop the iteration while maintaining a high decoding capability is critical in order to reduce the power consumption of the decoder and the decoding time. Techniques for stopping the iterative decoding using various stop conditions are disclosed in Japanese Unexamined Patent Application Publications Nos. 2004-194326, 2002-344330, and 2002-100995.
As an example of stopping the iterative decoding, a method of stopping turbo decoding using HDA (Hard-Decision Aided) is described hereinafter. The HDA is a stopping method for an iterative process which uses, in each iteration of the turbo decoding, the rate of bits whose HD (Hard Decision) result is "0"/"1" inverted from the previous iteration. A smaller rate of inverted bits indicates that the decoding process is approaching convergence or an end. Originally, HDA was used to end the iterative decoding of turbo codes by detecting that no error occurs. For example, the turbo decoding process can be ended when the HDA value falls below a prescribed threshold.
A decoding device which uses HDA not only for the detection of convergence but also for the detection of non-convergence in the decoding process to thereby optimize the number of times of iteration of turbo decoding is disclosed in A. Taffin, “Generalized stopping criterion for iterative decoders”, IEEE Electronics Letters, 26 Jun. 2003, Vol. 39, No. 13. FIG. 13 is a block diagram which depicts a decoding device taught by Taffin. A decoding device 201 includes a first decoder 202, a second decoder 203, interleavers 204 and 205, de-interleavers 206 and 207, a hard decision section 208, and a HDA determination section 209.
The decoding device 201 receives turbo codes which are transmitted through a transmission line as received data. The received data contain the first elemental code E and the second elemental code Eint. The elemental codes E and Eint are composed of the parity bits 1P and 2P and the systematic bits U and Uint as described earlier. The systematic bits Uint of the second elemental code Eint can be obtained by interleaving the systematic bits U of the first elemental code E. Thus, the actually transmitted data contain the systematic bits U and the parity bits 1P of the first elemental code E and the parity bits 2P of the second elemental code Eint.
The first decoder 202 and the second decoder 203 perform iterative decoding of the received data by soft-input soft-output decoding. SOVA (Soft-Output Viterbi Algorithm) and MAP (Maximum A Posteriori) are known as the soft-input soft-output decoding.
The first decoder 202 receives the received first elemental code E (first parity Y1p, systematic bits Ys) and external information Le2, performs decoding, and outputs first external information Le1. The interleaver (int) 204 interleaves the first external information Le1 to generate interleaved first external information Linte1. At the same time, the interleaver 205 interleaves the systematic bits Ys to generate interleaved systematic bits Yints, which is then supplied to the second decoder 203.
The second decoder 203 receives the interleaved first external information Linte1, the received second parity Y2p, and the interleaved systematic bits Yints, performs decoding, and outputs second external information Linte2. The second external information Linte2 is then de-interleaved by the de-interleaver 206 and supplied to the first decoder 202 as external information Le2, which the first decoder 202 uses in the next decoding. The above process is then repeated. One iteration of decoding ends upon completion of the decoding process in the first decoder 202 and the second decoder 203.
The second decoder 203 calculates logarithmic likelihood ratio Lint2 and outputs it to the de-interleaver 207. The de-interleaver 207 de-interleaves the logarithmic likelihood ratio Lint2 into logarithmic likelihood ratio L2, and then the hard decision section 208 determines a hard decision result. In the decoding device 201, the hard decision result is supplied to the HDA determination section 209, which then determines whether or not to stop the iterative process of turbo decoding.
The HDA determination section 209 compares the BER calculated by the HDA and the information length rate with thresholds and determines convergence/non-convergence of the turbo codes, thereby optimizing the iterative number of the turbo decoding.
FIG. 14 is a flowchart showing a decoding method in the decoding device 201. As shown in FIG. 14, the number of times of iterative decoding to be performed in the first decoder 202 and the second decoder 203, which is referred to hereinafter as the iterative number, is set to 1, and an upper limit of the iterative number is set to 8 (Step S101). Then, the first decoder 202 performs decoding to generate first external information Le1 (Step S102). The interleaver 204 interleaves the first external information Le1 and supplies the result to the second decoder 203. At the same time, the interleaver 205 interleaves the systematic bits Ys and supplies the result to the second decoder 203. Although the first decoder 202 also generates a logarithmic likelihood ratio (LLR) L1, it is not used herein. An initial value of an input Le2 to the first decoder 202 is 0.
The second decoder 203 outputs the interleaved second external information Linte2 and the interleaved logarithmic likelihood ratio Lint2. The interleaved second external information Linte2 is de-interleaved by the de-interleaver 206 into second external information Le2 to be input to the first decoder 202. The interleaved logarithmic likelihood ratio Lint2 is de-interleaved by the de-interleaver 207 into logarithmic likelihood ratio L2 and then input to the hard decision section 208 where a hard decision result is generated.
After that, it is determined whether the iterative number exceeds 1 or not (Step S104) and, if the iterative number is equal to or more than 2, a determination value Δ0 is calculated by the following Expression 1 (Step S105).
Δ0 = (1/N) · Σ_{k=0}^{N−1} | û_k(L2^i) − û_k(L2^{i−1}) |        (1)

where k indicates a bit identifier in the block, i indicates the iterative number, N indicates the block length, û( ) indicates a hard decision result, L2 indicates the logarithmic likelihood ratio (LLR) of the second decoder, and Δ0 indicates the ratio of differences in hard decision results.
Thus, the determination value Δ0 indicates the ratio of differences between the hard decision result of the current logarithmic likelihood ratio L2 and the hard decision result of the previous logarithmic likelihood ratio L2 over the bits of the block. If the determination value Δ0 is close to 0, the current and previous decoding results are nearly equal. If the determination value Δ0 is close to 1, the current and previous decoding results differ greatly.
If the determination value Δ0 is larger than a convergence determining threshold ηconv and smaller than a non-convergence determining threshold ηnon-conv (No in Step S106), the iterative decoding is continued until the iterative number reaches MAX=8. Thus, the process determines whether or not the iterative number has reached MAX=8 (Step S107) and, if it is less than 8, increments the iterative number (Step S108) and repeats the procedure from Step S102.
For the determination in Step S106, when the iterative number is 2 or above, the determination value Δ0 is calculated by Expression 1 using the hard decision results û. A stopping criterion is then evaluated against the convergence determining threshold ηconv and the non-convergence determining threshold ηnon-conv. If either the convergence or the non-convergence condition is satisfied, the iterative decoding is stopped.
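Steps S105 and S106 can be sketched as follows; hard_decision, delta0, and should_stop are hypothetical helper names, and the threshold values are the example values ηconv=2% and ηnon-conv=20% used in the simulation described below.

```python
# Sketch of the HDA-based stop decision (Expression 1 and Step S106).
# Threshold values are the example values eta_conv = 2% and
# eta_nonconv = 20%; they are parameters, not fixed by the method.

ETA_CONV = 0.02      # convergence determining threshold
ETA_NONCONV = 0.20   # non-convergence determining threshold

def hard_decision(llr):
    """Hard-decide each bit from its log-likelihood ratio."""
    return [1 if x >= 0 else 0 for x in llr]

def delta0(llr_cur, llr_prev):
    """Expression 1: ratio of bits whose hard decision flipped
    between the previous and the current iteration."""
    cur = hard_decision(llr_cur)
    prev = hard_decision(llr_prev)
    return sum(abs(c - p) for c, p in zip(cur, prev)) / len(cur)

def should_stop(llr_cur, llr_prev):
    """Stop when convergence (small delta0) or non-convergence
    (large delta0) is determined; otherwise keep iterating."""
    d = delta0(llr_cur, llr_prev)
    if d < ETA_CONV:
        return True, "converged"
    if d > ETA_NONCONV:
        return True, "not converging"
    return False, "continue"
```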
Advantages of such a conventional decoding device which controls the stopping of the iterative decoding using the non-convergence determination in addition to the convergence determination are described below. FIG. 15 is a graph showing the relationship between a noise and an error rate. The vertical axis indicates a block error rate (BLER) and a bit error rate (BER). The horizontal axis indicates a signal-to-noise power density ratio Eb/N0 (dB). FIG. 16 is a view showing the relationship of the iterative number with respect to a signal-to-noise power density ratio.
In FIGS. 15 and 16, the signal-to-noise power density ratio Eb/N0 (dB) on the horizontal axis is such that a larger value indicates a smaller noise. The BLER on the vertical axis is calculated by counting a block which contains one or more bit errors as an error, and the BER indicates a value (%) obtained by dividing the number of error bits by 954 bits (the encoding block size).
In FIG. 16, (OFF, OFF) indicates the case where neither the convergence nor the non-convergence determination is performed, and (2%, OFF) indicates the case where the iterative stop control is performed only by the convergence determination with the convergence determining threshold ηconv=2%. (OFF, 20%) indicates the case where the iterative stop control is performed only by the non-convergence determination with the non-convergence determining threshold ηnon-conv=20%, and (2%, 20%) indicates the case where the iterative stop control is performed by the convergence/non-convergence determination with the convergence determining threshold ηconv=2% and the non-convergence determining threshold ηnon-conv=20%.
The simulation conditions are as follows. The transmitting-side conditions involve the rate-1/3 (15, 13)8 PCCC turbo encoder, the 2-stage rate matching, and the parallel bit-level channel interleaver. The modulation method is the 16QAM constellation with conventional Gray mapping. The code block size is fixed to 954 bits (938 systematic bits and 16 CRC bits). The parity bits are 1/2 punctured. Thus, 1920 bits are transmitted at intervals of 2 ms. The fading condition is 50 km/h on a single path. The receiving-side conditions involve no RAKE combining due to the single path, Max-Log-MAP soft-output decoding, a maximum iterative number of 8, and Hybrid-ARQ disabled.
Under the above conditions, the BLER and BER decrease as the noise decreases, as shown in FIG. 15. Further, the iterative number decreases as the noise decreases, as shown in FIG. 16. Where BLER=10% (Eb/N0=16 dB), the average iterative number is about 2.25, which reduces processing by more than 70% compared with the case where the iterative number is fixed to 8. With the convergence determination only, at a threshold of 2%, the average iterative number is about 2.8. Therefore, the stop control with the convergence/non-convergence determination leads to about a 20% reduction in processing compared with the stop control with the convergence determination only.
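The reduction figures quoted above can be verified by direct arithmetic, as sketched below.

```python
# Checking the processing-reduction figures quoted for BLER = 10%
# (Eb/N0 = 16 dB) against the stated average iterative numbers.

fixed = 8.0          # fixed iterative number (related art A)
conv_only = 2.8      # average iterations, convergence determination only
conv_nonconv = 2.25  # average iterations, convergence/non-convergence

reduction_vs_fixed = 1 - conv_nonconv / fixed     # about 0.72, i.e. > 70%
reduction_vs_conv = 1 - conv_nonconv / conv_only  # about 0.20, i.e. ~20%

assert reduction_vs_fixed > 0.70          # "more than 70%" of processing
assert abs(reduction_vs_conv - 0.20) < 0.01  # "about 20%" reduction
```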
Although a conventional decoding device is described above by way of illustration, various iterative control methods have been proposed, as described in the above-described patent documents and Taffin. FIG. 17 shows a summary of these techniques. FIG. 17 is a view depicting the iterative control methods and the number of iterations according to several related arts. The iterative control method employed in the decoding device taught by Taffin corresponds to the related art D, which is an improved version of the iterative control method corresponding to the related art C below. The correspondence in FIG. 17 is as follows.
    Related art A: The iterative number is fixed (e.g. 8 times).
        A1: The decoding is iterated 8 times if an error cannot be corrected after completing 8 iterations.
        A2: The decoding is iterated until reaching the fixed upper limit (8 times) even if an error is corrected before completing 8 iterations.
    Related art B: Whether an error exists is determined using error detecting codes until reaching a maximum iterative number (e.g. 8 times), and the iterative process is stopped if there is no error.
        B1: When there is no error detecting code, the decoding is iterated 8 times if an error cannot be corrected after completing 8 iterations (=A1).
        B2: When there is no error detecting code, the decoding is iterated 8 times even if an error is corrected before completing 8 iterations.
        B3: When there are error detecting codes, the decoding is iterated 8 times if an error cannot be corrected after completing 8 iterations.
        B4: When there are error detecting codes, the iterative process is stopped if an error is corrected before completing 8 iterations.
    Related art C: Convergence of the error correction is determined until reaching a maximum iterative number (e.g. 8 times), and the iterative decoding is stopped if it is determined that the error correction has converged.
        C1: The decoding is iterated 8 times if convergence of the error correction is not detected after completing 8 iterations.
        C2: The iterative process is stopped if convergence of the error correction is detected before completing 8 iterations.
    Related art D: Determination of non-convergence of the error correction is added to the above case C, and the iterative decoding is stopped if it is determined that the error correction will not converge even upon reaching the maximum iterative number.
        D1: The decoding is iterated 8 times if neither convergence nor non-convergence of the error correction is detected after completing 8 iterations.
        D2: The iterative process is stopped if non-convergence of the error correction is detected before completing 8 iterations.
        D3: The iterative process is stopped if convergence of the error correction is detected before completing 8 iterations.
As described in the foregoing, the related art D corresponding to the decoding device taught by Taffin overcomes the drawback of the related art C, namely that the decoding is undesirably iterated 8 times when the error correction has not converged even after completing 8 iterations (case C1). By performing the non-convergence determination in addition to the convergence determination, the iterative process can be stopped when non-convergence is determined before reaching the maximum iterative number of 8, as shown in D2 of the related art D. This prevents unnecessary iterative processing. The use of the convergence/non-convergence determination thus enables the iterative control of turbo decoding and thereby optimizes the iterative number.
The W-CDMA and the like are transmission methods standardized by the 3GPP (3rd Generation Partnership Project), which is working on the standardization of the third-generation mobile communication system.
HSDPA (High Speed Downlink Packet Access), which is one of the high-speed packet transmission technologies, is defined by the 3GPP. The HSDPA employs adaptive modulation, which comprehensively checks the varying condition of the radio wave propagation path or a change in the velocity of propagation of a radio wave in the air and automatically selects an optimum modulation order. Specifically, the low-rate QPSK (Quadrature Phase Shift Keying) is used when the receiving condition of a radio wave is not good, and the high-rate 16QAM (16 Quadrature Amplitude Modulation) is used when the receiving condition of a radio wave is good. Noise tolerance and transmission rate are in a trade-off: the QPSK is low-rate because of its large overhead but robust against noise, while the 16QAM enables high-speed transmission but is more susceptible to noise.
According to the HSDPA, the number of transport blocks (TrBK) is regulated to one. At the end of the transport block TrBK, a 24-bit CRC is added as an error detecting code. Further, the block size of a code block (CdBK) is regulated to 5114 bits at maximum. The code block CdBK is the unit block for turbo coding.
Accordingly, if TrBK+CRC exceeds 5114 bits, TrBK+CRC is divided into a plurality of code blocks CdBK. According to the HSDPA, there are several categories in accordance with the transfer rate, and the maximum TrBK size is predetermined depending on the category. For example, a TrBK can contain up to two code blocks CdBK in the categories 5 and 6, up to three code blocks CdBK in the categories 7 and 8, and up to six code blocks CdBK in the category 10. Consequently, the CRC as an error detecting code is added at the end of the final CdBK, and no error detecting code is added to the code blocks CdBK in the middle.
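The segmentation described above can be sketched as follows; segment_transport_block is a hypothetical helper which splits TrBK+CRC into roughly equal code blocks of at most 5114 bits, ignoring details such as filler bits that an actual implementation would have to handle.

```python
import math

# Sketch of transport-block segmentation into code blocks for turbo coding.
# Equal-sized split for illustration only; filler-bit handling is omitted.

MAX_CDBK_BITS = 5114   # maximum code block size for turbo coding
CRC_BITS = 24          # 24-bit CRC appended to the transport block

def segment_transport_block(trbk_bits):
    """Split TrBK+CRC into code block sizes of at most 5114 bits each.
    Only the final CdBK carries the transport-block CRC."""
    total = trbk_bits + CRC_BITS
    num_cdbk = math.ceil(total / MAX_CDBK_BITS)
    base = math.ceil(total / num_cdbk)
    sizes = []
    remaining = total
    for _ in range(num_cdbk):
        size = min(base, remaining)
        sizes.append(size)
        remaining -= size
    return sizes
```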
Application of the above related arts A to D to the case of performing turbo decoding on such a transport block TrBK is as follows. In terms of optimizing the iterative number, the related art A, which always performs iterative decoding until reaching the maximum iterative number, is not suitable. Because an error detecting code is added only to the final code block CdBK, the related art B cannot be applied. Further, because the non-convergence determination is not performed in the related art C, iterative decoding is undesirably performed until reaching the maximum iterative number even when the error correction is not converging. Thus, the application of the related art D, which performs the non-convergence determination in addition to the convergence determination, is examined hereinafter.
FIG. 18 is a schematic view describing a decoding method in which the related art D (Taffin) is applied to the HSDPA. FIG. 19 is a flowchart showing the decoding method according to the related art D. For simplification of description, the number of code blocks CdBK is three, as shown in FIG. 19. Specifically, a transport block TrBK is composed of three code blocks: a 1st code block CdBK, a 2nd code block CdBK, and a Last code block CdBK. An error detecting code CRC is added to the end of the transport block TrBK, i.e. at the Last code block CdBK.
Further, in this example, the number of turbo decoders is one, and non-convergence of the error correction is detected at the fourth iterative step in each code block CdBK. Because it is necessary to perform error detection on the transport block TrBK using the CRC which is added to the Last code block CdBK, the code blocks CdBK should be processed in the order of receipt. Under the above conditions, the method according to the related art D detects non-convergence of the error correction at the fourth iterative step in each code block CdBK and thereby stops the iterative process for that block.
The method according to the related art C, under the same conditions, requires a total of 24 iterations of decoding, 8 (i.e. the maximum iterative number) for each of the code blocks CdBK. On the other hand, the method according to the related art D requires only a total of 12 iterations under the above conditions by employing the non-convergence detection.
However, in the example shown in FIG. 18, it is obvious at the point when non-convergence is detected in the 1st code block CdBK that the CRC determination on this transport block TrBK would detect an error. In such a case, there should be no need to perform any decoding process on the subsequent code blocks CdBK. Therefore, the total of 8 iterations of decoding in the 2nd code block CdBK and the Last code block CdBK is unnecessary processing.
Specifically, as shown in FIG. 19, the method of processing the transport block according to the related art D initializes the code block No.=1 and the number of code blocks=3 in Step S201, and performs turbo decoding on the 1st code block CdBK (Step S202). The turbo decoding is the process shown in FIG. 14. More specifically, the process calculates the determination value Δ0 in the code block CdBK and, unless convergence or non-convergence is detected, continues decoding until reaching the maximum iterative number (=8). In the above example, the process detects non-convergence at the fourth iterative step and ends the iterative decoding of the code block CdBK. The process then proceeds to Step S203 and undesirably repeats Steps S202 and S203 until completing the final code block Last CdBK, even if non-convergence has been detected in Step S202.
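The transport-block loop of FIG. 19 can be sketched as follows; decode_code_block is a hypothetical stub standing in for the per-code-block turbo decoding of FIG. 14, here fixed to report non-convergence at the fourth iteration so that the drawback, i.e. the loop continuing through every code block, is visible.

```python
# Sketch of the related art D transport-block loop (FIG. 19).
# decode_code_block is a stub for the turbo decoding of FIG. 14;
# it reports the outcome and the iterations spent on the block.

MAX_ITER = 8

def decode_code_block(block):
    """Stub: pretend non-convergence is detected at the 4th iteration."""
    return "non-convergence", 4

def decode_transport_block(code_blocks):
    total_iterations = 0
    outcomes = []
    # Related art D keeps processing every code block, even after a
    # non-convergence result already guarantees the CRC check will fail.
    for block in code_blocks:
        outcome, iters = decode_code_block(block)
        outcomes.append(outcome)
        total_iterations += iters
    return outcomes, total_iterations
```

With three code blocks, the loop spends 4 iterations on each for a total of 12, even though the 8 iterations spent on the 2nd and Last code blocks are already known to be wasted once the 1st block fails to converge.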
FIGS. 20A and 20B are views describing the drawback of the related art D. As shown in FIG. 20A, the decoding method of the related art D does not stop processing the code blocks CdBK until the number of decoded blocks CdBK reaches the maximum number, 3. Therefore, the process undesirably performs the iterative decoding on the subsequent 2nd code block CdBK and the Last code block CdBK. Further, if neither convergence nor non-convergence is detected in any code block CdBK, as shown in FIG. 20B, the process may perform processing until the maximum iterative number is reached in all the code blocks CdBK, which results in useless iterative decoding.