The present invention relates to an error correcting scheme for random errors occurring on channels and, in particular, to sequential decoding of convolutionally encoded data, i.e., a convolutional code.
Block codes and convolutional codes are well known as forward error correcting (FEC) codes used on channels with random noise, such as satellite communication channels. The decoding error rate of a convolutional code decreases exponentially as its constraint length (K) increases. Generally speaking, the error-correcting capability of a convolutional code is superior to that of a block code when the decoding apparatus for each is equivalent in terms of hardware scale. Therefore, convolutional codes are now being actively applied to digital satellite communication channels.
Viterbi decoding and sequential decoding are known as decoding methods for the convolutional code, and the speeding up of each method has been studied. Viterbi decoding (which is also called maximum likelihood decoding) has a high error-correcting capability and is thus used in various fields. However, the use of Viterbi decoding is limited to convolutional codes with a relatively short constraint length (K ≤ 8), because the scale of hardware for Viterbi decoding increases exponentially with the constraint length of the code.
On the other hand, sequential decoding is a decoding method which simplifies the maximum likelihood decoding method and can thus be implemented with a relatively small scale of hardware. In view of this, sequential decoding can be called quasi-maximum likelihood decoding. Though the error-correcting capability of sequential decoding is somewhat inferior to that of Viterbi decoding for convolutional codes of the same constraint length, the scale of an apparatus for sequential decoding grows approximately linearly with the constraint length of the code. Therefore, sequential decoding makes it possible to configure a decoder which can handle a code having a relatively long constraint length and which thus has a higher error-correcting capability than the Viterbi decoder. However, the number of calculations necessary for decoding varies considerably depending on channel quality, while it is almost independent of the constraint length of the code. This means that under a very noisy channel condition, it is difficult to operate a sequential decoder at high speed.
As is well known, two types of decoding algorithm are conventionally used for sequential decoding: one is the Fano algorithm and the other is the stack algorithm (which is also called the Zigangirov-Jelinek algorithm, abbreviated as the Z-J algorithm hereinafter). The error-correcting capability of the Fano algorithm is generally equivalent to that of the latter under the same channel condition. However, hardware for the stack algorithm is somewhat complicated compared with that for the Fano algorithm, because a relatively large memory is necessary for the stack. For this reason, most sequential decoders which have been put to practical use are based on the Fano algorithm.
Nevertheless, the stack algorithm is very attractive. This is because it is very simple compared with the Fano algorithm, which is very complicated and leaves little room for improvement. In terms of the average number of calculations expended on decoding a convolutional code (an important factor in evaluating the properties of a sequential decoder), the stack algorithm is superior to the Fano algorithm on considerably noisy channels. In practice, the stack algorithm is generally faster than the Fano algorithm on a channel whose ratio of signal power per information bit to noise power density (Eb/No) is less than 8 dB.
A description will first be given of convolutional coding of a sequence of information bits, to aid understanding of the conventional stack algorithm explained later.
FIG. 1 illustrates a configuration of a coder in the case where the constraint length (K) of the code is 3, the number (v) of bits composing a branch code word is 2, and the coding rate (r) is thus 1/2. A sequence of information bits generated by an information source (not shown) is supplied via an input terminal 1 to a shift register 2 composed of two serially connected one-symbol delay elements 2a and 2b. The input terminals of each of modulo-2 adders 3 and 4 are connected to the delay elements 2a and 2b as shown in FIG. 1. Output terminals 5 and 6 of the coder are connected to the outputs of the adders 3 and 4, respectively. As will be apparent from FIG. 1, one bit is output at each of the terminals 5 and 6 every time one information bit is supplied to the input terminal 1. In other words, one information bit is encoded into a 2-bit code word.
FIG. 2 illustrates a binary tree structure of the code word sequences which can be generated by the coder of FIG. 1. In this figure, a circle indicates a node, and a numeral inserted into the circle indicates the node number associated with that node. Furthermore, a line connecting two neighboring nodes is called a branch, and a numeral attached to a branch denotes a code word composed of two digits. Moreover, a numeral in parentheses represents a state of the shift register 2.
At the start of the coding process, the coder is specified to be "located" at the root node 0 of the tree. In this case, the state of the shift register 2 is initialized to (00). If the first bit of the information sequence which is supplied to the input terminal 1 is a 0, the coder follows the upper branch out of the root node 0 to a next node 1. In this case, the coder generates a code word (00) at the output terminals 5 and 6, and the state of the shift register 2 remains (00). On the contrary, if the first bit is a 1, the coder follows the lower branch out of the root node 0 to a next node 2, which is located at the same tree level as the node 1. In this case, the coder generates a code word (11) and the state of the shift register is changed from (00) to (10). In this manner, the coder follows branches in response to consecutive information bits. Therefore, a code word sequence generated in response to an information bit sequence corresponds to one of the paths through the tree. For instance, if an information bit sequence is (0,1,1,0), the coder follows the path indicated by the thick line and generates a code word sequence (00,11,01,01). The information bit sequence is divided into several blocks for the convenience of decoding. In the above example, a block is composed of four information bits and accordingly the tree has four tree levels. Each block is followed by (K-1) 0s (in the above example, two zero bits) in order to terminate the block data and initialize the shift register 2 for the next block data. The code word sequence thus generated is sent through a channel to a receiving side.
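The coding process described above can be sketched as follows. Note that the exact tap connections of the adders 3 and 4 in FIG. 1 are assumed here to be (111) and (101) in binary (octal 7, 5), since these reproduce the code words shown on the tree of FIG. 2; this is a minimal illustration, not the definitive coder of the invention.

```python
def encode(bits, K=3):
    """Rate-1/2 convolutional coder sketch, K=3 (taps 111 and 101 assumed)."""
    state = [0, 0]                         # shift register 2 (elements 2a, 2b)
    out = []
    for u in bits + [0] * (K - 1):         # (K-1) zeros terminate the block
        c1 = (u + state[0] + state[1]) % 2  # output of modulo-2 adder 3
        c2 = (u + state[1]) % 2             # output of modulo-2 adder 4
        out.append((c1, c2))
        state = [u, state[0]]              # shift the new bit into the register
    return out
```

For the example block (0,1,1,0), the first four code words produced are (00,11,01,01), matching the thick path of FIG. 2, followed by two termination code words that return the register to (00).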
A decoder using a conventional stack algorithm determines the most likely path in the tree specified by the coder. For this purpose, the decoder reconstructs paths through the tree in response to a received bit sequence by calculating a correlation index (likelihood or metric) between the received sequence and paths through the tree. Then, it outputs, as a decoded sequence, the one code word sequence which has the largest correlation.
A detailed description of the conventional stack algorithm will now be given, referring to FIG. 3.
(1) Firstly, insert the root node 0 into a stack and place it at the top position of the stack (block 101 in FIG. 3).
(2) Calculate the likelihood of each of the two branches extending from the node positioned at the top of the stack, and add each branch likelihood to the likelihood of that node to obtain the likelihoods of the two successor nodes 1 and 2 (block 102).
(3) Eliminate from the stack the node positioned at its top (block 103).
(4) Insert the two nodes obtained by the above item (2) into the stack (block 104).
(5) Arrange the nodes in the stack in decreasing order of the likelihood of the node (block 105).
(6) Repeat the operations of blocks 102-105 with regard to a node which has newly been placed at the top, and then terminate the decoding operation when a node belonging to the last or deepest tree level is placed at the top (block 106).
(7) Output a path leading from the root node to the node at the top of the stack (block 107).
In the above stack algorithm, two branches are extended out of the node positioned at the top of the stack every time the operation of block 102 is carried out (the operation of extending branches is termed "extension" hereafter). Then, the likelihood of each of the two branches (referred to as a branch metric hereafter) is calculated. In this case, the Hamming distance between the received sequence and the bit sequence associated with each branch is calculated, and the branch metric of each branch is determined in accordance with the calculated Hamming distance. The values of the branch metrics are selected such that if decoding proceeds along a correct path, the likelihood of the path (referred to as a path metric hereafter, which corresponds to the sum of the branch metrics of the branches forming the path) should increase and, on the other hand, if decoding proceeds along an incorrect path, its path metric should decrease. By way of example, the branch metrics 1, -4 and -9 are given corresponding to the Hamming distances 0, 1 and 2, respectively, as shown in FIG. 2.
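The mapping from Hamming distance to branch metric described above can be expressed directly, using the example values of FIG. 2 (these particular values are from the figure; other metric assignments with the same increasing/decreasing property are possible):

```python
# Branch metric per Hamming distance between the 2-bit received word
# and the 2-bit code word on a branch (example values from FIG. 2).
BRANCH_METRIC = {0: 1, 1: -4, 2: -9}

def branch_metric(received, code):
    d = sum(a != b for a, b in zip(received, code))  # Hamming distance
    return BRANCH_METRIC[d]
```

For instance, a received word (01) against the branch code word (00) gives Hamming distance 1 and hence branch metric -4.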
A description will now be given of the calculation of path metrics and the process of the stack decoding algorithm, referring to the example tree structure of FIG. 4, where the constraint length K=3 and the coding rate r=1/2. Note that in this figure, a numeral under a branch denotes its branch metric and a numeral in ( ) attached to each node denotes its path metric.
Assume that the code word sequence sent on the channel is (00,11,01,01) as shown in FIG. 2, and that the received code word sequence is (01,11,00,01) because of noise on the channel.
(1) Firstly, insert the root node 0 into the stack.
(2) Calculate the Hamming distance between the first received sequence (01) and each of the code words (00) and (11) on the branches diverging from the root node 0. In this case, since both Hamming distances are equal to 1, the branch metric -4 is given to each branch. Thus, the path metrics of the nodes 1 and 2 are equally -4.
(3) Eliminate the node 0 positioned at the top of the stack, insert the new nodes 1 and 2 into the stack, and arrange them in decreasing order of their path metrics. In this example, the nodes 1 and 2 have the same path metric. In this case, assuming that the node with the larger node number has priority, the node 2 is placed at the top of the stack.
(4) Beginning with the node 2, repeat extension, calculation of path metrics and arrangement in decreasing order. These processes correspond to blocks 102 to 105 in FIG. 3 (one pass through blocks 102 to 105 is defined as a step). Every time a step is executed, the contents of the stack vary as shown in Table 1 below.
TABLE 1
______________________________________
Step   Nodes in the Stack (Path Metric)
______________________________________
1      0(0)
2      2(-4), 1(-4)
3      1(-4), 6(-8), 5(-8)
4      4(-3), 6(-8), 5(-8), 3(-13)
5      10(-7), 9(-7), 6(-8), 5(-8), 3(-13)
6      21(-6), 9(-7), 6(-8), 5(-8), 3(-13), 22(-16)   (Termination of Decoding)
______________________________________
(5) Terminate extension when a node belonging to the last tree level is placed at the top of the stack (the node 21, in this example). Then, output the path [00-11-01-01] leading from the root node 0 to the node 21.
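The worked example above can be reproduced with a short sketch of the stack algorithm. The coder taps (octal 7, 5) and the node numbering rule (children of node n are 2n+1 and 2n+2) are assumptions consistent with FIGS. 1, 2 and 4; a priority queue keyed on (-path metric, -node number) stands in for the ordered stack, so that the node with the largest path metric is taken first, ties going to the larger node number as in step (3) above.

```python
import heapq

BRANCH_METRIC = {0: 1, 1: -4, 2: -9}   # per Hamming distance, as in FIG. 2

def branch_code(u, state):
    # One-branch output of the coder of FIG. 1 (taps 111, 101 assumed).
    return ((u + state[0] + state[1]) % 2, (u + state[1]) % 2)

def stack_decode(received):
    """Stack-algorithm sketch following blocks 101-107 of FIG. 3."""
    levels = len(received)
    # Entries: (-path_metric, -node_number, information bits so far, register state)
    stack = [(0, 0, [], (0, 0))]
    while True:
        neg_m, neg_n, bits, state = heapq.heappop(stack)   # node at the top
        if len(bits) == levels:        # a node at the deepest tree level: done
            return bits, -neg_m
        for u in (0, 1):               # extend both branches out of this node
            code = branch_code(u, state)
            d = sum(a != b for a, b in zip(code, received[len(bits)]))
            m = -neg_m + BRANCH_METRIC[d]
            child = 2 * (-neg_n) + 1 + u   # children of node n: 2n+1, 2n+2
            heapq.heappush(stack, (-m, -child, bits + [u], (u, state[0])))
```

Running this on the received sequence (01,11,00,01) terminates with the information bits (0,1,1,0) and path metric -6, i.e., the node 21 of Table 1, whose path corresponds to the code word sequence [00-11-01-01]. Note that this sketch returns information bits rather than the code words themselves.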
As shown in Table 1, the node number and path metric of each node are stored in the stack. As will be apparent from Table 1, according to the conventional stack algorithm, the number of stored nodes increases by one at each step.
As described in the foregoing, the information bit sequence is divided into a plurality of blocks, each of which has a predetermined length and is fed to the coder. The end of each block is followed by (K-1) 0s in order to reset the contents of the shift register 2 to the initial state (00). Correspondingly, decoding is also performed per block in accordance with the decoding algorithm of FIG. 3.