1. Field of the Invention
The present invention relates generally to a Viterbi decoder used for maximum likelihood decoding of convolutional codes.
2. Related Background Art
A Viterbi decoder is used for maximum likelihood decoding of convolutional codes, wherein from a plurality of known code sequences of possible input code sequences, the code sequence closest in code distance to that generated by a convolutional encoder is selected as the maximum likelihood code sequence (maximum likelihood path), and decoded data are obtained based on the information thus selected. Viterbi decoding has a high capability for correcting random errors occurring in a communication channel and, in combination with a soft decision decoding system, allows a particularly high coding gain to be obtained. For instance, in a mobile communication system, in which bit errors affect communication quality considerably and which is easily affected by interference waves, convolutional codes are used as error correcting codes and Viterbi decoding is employed for decoding them.
This Viterbi decoding algorithm is described briefly below. For instance, suppose a convolutional code, with a code rate R = 1/2 and a constraint length K = 3, whose generating polynomials are given by the following formulae:

G0(D) = 1 + D^2
G1(D) = 1 + D + D^2
FIG. 9 shows a structural example of a convolutional encoder for generating such a code. In FIG. 9, information bits as input data are delayed sequentially by two delay elements 901 and 902 such as flip-flops or the like. The input data and the data from the delay element 902 are added by an adder 903, whose sum is output as an output G0. The input data and the data from the delay element 901 are added by an adder 904, and then the data from the adder 904 and the data from the delay element 902 are added by an adder 905, whose sum is output as an output G1.
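The operation of the encoder of FIG. 9 can be sketched as follows. This is an illustrative Python model only; the function name and the representation of the code pairs are chosen for illustration and are not taken from the specification.

```python
def convolutional_encode(bits):
    """Rate-1/2, constraint-length-3 encoder with G0(D)=1+D^2, G1(D)=1+D+D^2.

    s1 and s0 model the two delay elements (901 and 902 in FIG. 9):
    s1 holds the previous input bit, s0 the bit before that.
    """
    s1 = s0 = 0
    out = []
    for b in bits:
        g0 = b ^ s0        # adder 903: input + output of delay element 902
        g1 = b ^ s1 ^ s0   # adders 904/905: input + delays 901 and 902
        out.append((g0, g1))
        s1, s0 = b, s1     # shift the input bit into the delay line
    return out
```

Feeding a single "1" followed by zeros reproduces the generator sequences 101 and 111 on the G0 and G1 outputs, as expected from the generating polynomials.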
When the contents of the respective delay elements 901 and 902 in such an encoder are denoted as S[1] and S[0], the state S[1:0] of the encoder can take one of the four states (00), (01), (10), and (11). From each state there are always two possible transitions, one for each input value.
In other words, in the case of an input "0", when a current state is (00) or (01), it is subjected to the transition to the state (00), and when the current state is (10) or (11), it is subjected to the transition to the state (01). In the case of an input "1", when the current state is (00) or (01), it is subjected to the transition to the state (10), and when the current state is (10) or (11), it is subjected to the transition to the state (11).
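The transitions listed above can be checked with a minimal sketch, in which a state is encoded as the 2-bit value (S[1] << 1) | S[0]. The helper name is hypothetical and used only for illustration.

```python
def next_state(state, bit):
    """Next encoder state after shifting `bit` into the delay line:
    the input bit becomes the new S[1] and the old S[1] becomes the new S[0]."""
    s1 = (state >> 1) & 1
    return (bit << 1) | s1
```

For example, both (00) and (01) transition to (00) on an input "0" and to (10) on an input "1", in agreement with the text.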
As a method of illustrating such state transitions, a trellis diagram is used, and such state transitions are illustrated by a trellis diagram shown in FIG. 10. In FIG. 10, solid-line arrows (branches) indicate transitions in the case of an input "0" and broken-line branches indicate transitions in the case of an input "1". The numerals provided along the respective branches are codes (G0, G1) output upon the transitions of the respective branches.
As is apparent from FIG. 10, exactly two branches become confluent at each state without fail. In the Viterbi decoding algorithm, the maximum likelihood (most likely) one of the two branches reaching each state is selected, and information on the path thus selected (a path selection signal for the surviving path) is stored in a memory. The path selection is executed until a predetermined path length is obtained, and then the maximum likelihood surviving path is searched (traced back) based on the contents of the path selection signal memory. Based on the sequence of maximum likelihood states obtained by the trace-back, the information input to the convolutional encoder is decoded.
The Viterbi decoder for decoding convolutional codes based on such a Viterbi algorithm basically includes a branch metric calculation means, an ACS (add-compare-select) operation means, a path metric memory, a path selection signal memory, and a trace-back means. The branch metric calculation means calculates code distances (branch metrics) between input code sequences and code sequences predicted in respective branches. The ACS operation means calculates accumulated values (path metrics) of branch metrics reaching respective states and selects surviving paths. The path metric memory stores the path metrics of the respective states, and the path selection signal memory stores information of the selected paths. The trace-back means searches maximum likelihood surviving paths based on the contents of the path selection signal memory.
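As a simple illustration of the branch metric calculation means, a hard-decision branch metric is the Hamming distance between the received code pair and the code pair predicted on a branch. This is a sketch under a hard-decision assumption; a soft decision decoding system would use finer-grained distances instead.

```python
def branch_metric(received, expected):
    """Hamming-distance branch metric between a received code pair
    (g0, g1) and the pair expected on a branch (hard decision)."""
    return (received[0] ^ expected[0]) + (received[1] ^ expected[1])
```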
In the above-mentioned ACS operation means, the surviving paths in respective states are selected according to a so-called path metric transition diagram, and the path metrics of the surviving paths are calculated. This path metric transition diagram is prepared based on the trellis diagram as shown in FIG. 10.
FIGS. 11A and 11B show path metric transition diagrams when codes indicated by the trellis diagram shown in FIG. 10 are used. That is to say, in the trellis diagram shown in FIG. 10, there are two paths that become confluent in the state (00), one of which is generated by an output of a code (00) from the state (00) and the other of which generates a code (11) from the state (01). Therefore, the path metric PM00(new) in the current state (00) is expressed by one of the following two formulae:

PM00(new)a = PM00(old) + BM00
PM00(new)b = PM01(old) + BM11
where PM00(old) and PM01(old) indicate path metrics of the preceding states and BM00 and BM11 denote branch metrics.
In other words, a maximum likelihood value of two path metrics PM00(new)a and PM00(new)b during the ACS operation is selected and the path metric thus selected is determined as the path metric PM00(new) of the current state (00).
There are two paths that become confluent in the state (10), one of which is generated by an output of a code (11) from the state (00) and the other of which generates a code (00) from the state (01). Therefore, the path metric PM10(new) in the current state is expressed by one of the following two formulae:

PM10(new)a = PM00(old) + BM11
PM10(new)b = PM01(old) + BM00
Further, there are two paths that become confluent in the state (01), one of which is generated by an output of a code (01) from the state (10) and the other of which generates a code (10) from the state (11). Therefore, the path metric PM01(new) in the current state is expressed by one of the following two formulae:

PM01(new)a = PM10(old) + BM01
PM01(new)b = PM11(old) + BM10
Furthermore, there are two paths that become confluent in the state (11), one of which is generated by an output of a code (10) from the state (10) and the other of which generates a code (01) from the state (11). Therefore, the path metric PM11(new) in the current state is expressed by one of the following two formulae:

PM11(new)a = PM10(old) + BM10
PM11(new)b = PM11(old) + BM01
Based on the above, the path metric transition diagrams as shown in FIGS. 11A and 11B can be prepared.
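The four update rules can be grouped into "butterfly" operations: one pair of old path metrics with the relationship of continuous even and odd number states, together with one pair of branch metrics, yields the new metrics of the corresponding upper and lower order state pair. The following is a minimal sketch, assuming the metrics are code distances so that the smaller value is the more likely; the function name is illustrative.

```python
def acs_butterfly(pm_even, pm_odd, bm_a, bm_b):
    """One ACS 'butterfly': from the path metrics of an even/odd old-state
    pair, compute the (metric, path selection bit) of the two new states.
    Smaller metric = more likely (metrics as code distances).
    ps = 0 selects the 'a' candidate (even predecessor),
    ps = 1 selects the 'b' candidate (odd predecessor)."""
    # first new state: candidates pm_even + bm_a and pm_odd + bm_b
    up_a, up_b = pm_even + bm_a, pm_odd + bm_b
    # second new state: the same terms with the branch metrics swapped
    lo_a, lo_b = pm_even + bm_b, pm_odd + bm_a
    pm_up, ps_up = (up_a, 0) if up_a <= up_b else (up_b, 1)
    pm_lo, ps_lo = (lo_a, 0) if lo_a <= lo_b else (lo_b, 1)
    return (pm_up, ps_up), (pm_lo, ps_lo)
```

With bm_a = BM00 and bm_b = BM11 applied to PM00(old) and PM01(old), the call yields PM00(new) and PM10(new); with bm_a = BM01 and bm_b = BM10 applied to PM10(old) and PM11(old), it yields PM01(new) and PM11(new).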
As can be seen from the above formulae and the path metric transition diagrams shown in FIGS. 11A and 11B, the difference between PM00(new) and PM10(new) is only the correspondence between PMxx(old) and BMxx when BM11 and BM00 are added to PM00(old) and PM01(old).
Similarly, the difference between PM01(new) and PM11(new) also is only the correspondence between PMxx(old) and BMxx when BM01 and BM10 are added to PM10(old) and PM11(old).
Furthermore, the two PMxx(old) in each case described above have the relationship of continuous even and odd number states and the two PMxx(new) have the relationship of upper and lower order states in which only their most significant bits differ from each other.
These facts indicate that two PMxx(new) with the relationship of the upper and lower order states can be calculated from the two BMxx and the two PMxx(old) with the relationship of continuous even and odd number states.
Next, the following description is directed to path selection signals and respective state transitions. In the above description, the pair of path metrics to be determined as current states have the relationship that only the least significant bits of their preceding states are different from each other. Based on this, maximum likelihood path metric selection information (path selection signals) of the respective states is indicated as PS00(new), PS01(new), PS10(new) and PS11(new), and the relationships between their values and path metric selections are assumed as follows:

PSxx(new) = 0 -> PMxx(new)a is selected.
PSxx(new) = 1 -> PMxx(new)b is selected.
In this case, the values of respective PSxx(new) are the same as those of the least significant bits of the corresponding states at the preceding time point.
In other words, when a state at a time point n is denoted as S(n) and a path selection signal in the state S as PS(S), the relationship between the respective states at time points n and n-1 and the path selection signal is expressed as follows.
S(n-1)={S(n), PS(S)}
In the above formula, {a, b} denotes an operation of concatenating b to the least significant bit side of a and deleting a part of the most significant bit side of a with a width corresponding to a bit width of b.
Such facts indicate that the preceding state of a certain state can be determined from the path selection signal of the certain state.
From the above formula, in the case where the maximum likelihood state at a time point n is determined when a predetermined number of path selection signals are stored, the states selected in the past can be determined from the values of the path selection signals at respective time points. This operation is referred to as "trace-back" and one example is shown in FIG. 12.
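The trace-back of FIG. 12 can be sketched as follows, directly applying S(n-1) = {S(n), PS(S(n))}. This is an illustrative model only; ps_memory[t][s] stands for the path selection bit stored for state s at time point t, and the function name is hypothetical.

```python
def trace_back(final_state, ps_memory, state_bits=2):
    """Recover the state sequence by tracing back through stored path
    selection signals.  The formula S(n-1) = {S(n), PS(S(n))} shifts the
    selection bit in from the LSB side and drops the MSB."""
    mask = (1 << state_bits) - 1
    states = [final_state]
    s = final_state
    for t in range(len(ps_memory) - 1, -1, -1):
        ps = ps_memory[t][s]
        s = ((s << 1) | ps) & mask   # {S(n), PS(S(n))}
        states.append(s)
    states.reverse()
    return states
```

Because the new S[1] of the encoder equals the input bit, the decoded information bit at each step is simply the most significant bit of the corresponding state.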
In some cases, in order to determine the maximum likelihood state at a time point n, a method may be employed in which additional bits (tail bits) are added to the information bits so that the state of the encoder is converged to a predetermined value (for instance, 00 or the like) at the end of the convolutional code sequence.
As described at the beginning of this specification, the state transitions in the convolutional encoder are caused by successive inputs of information bits to the delay elements. Therefore, the determination of the state transitions by trace-back to the past states means the direct determination of the information bits.
Through the above-mentioned processes, the information bits can be decoded from the received code sequence by Viterbi decoding.
In a conventional Viterbi decoder, it is known that the ACS operation is executed by time-sharing processing and a RAM (a random access memory) is used as a path metric memory for storing a path metric obtained from a result of the ACS operation. In such a Viterbi decoder, the path metric that is obtained from a result of the ACS operation and is to be used for the ACS operation at a subsequent time point is allocated to one address of the RAM. In other words, a path metric of one state is assigned to one address of one RAM.
In this case, since a new path metric is generated from the path metrics of two past states, the memory must be accessed three times per ACS operation: twice to read out the two past path metrics and once to write one new path metric. Even when a so-called dual-port RAM is used so that writing and readout can be performed separately and concurrently, two readouts are still required. Therefore, many memory accesses are necessary and a high speed RAM is required, which hinders the reduction in power consumption and the increase in speed.
As a further improved method, JP 9-232973 A discloses a calculation method in which, based on the fact that two PMxx(new) with a relationship of upper and lower order states can be calculated from two BMxx and two PMxx(old) with a relationship of continuous even and odd number states as described above, two path metrics are read out and two path metrics are calculated and output. In this method, two memories are provided for the upper orders and lower orders of the path metrics, respectively, and an arrangement is provided for reading out a continuous even/odd pair of path metrics at a time, thus calculating two path metrics with a single readout and a single writing.
In this method, however, it is necessary to prepare two path metric memories for the ACS operation. Generally, when one memory is divided into two memories of equal capacity, the circuit size and area increase, which is disadvantageous for reducing power consumption.
Next, the following description is directed to the trace-back. In view of the fact that the preceding state of a certain state can be determined from the path selection signal of the certain state as described above, FIG. 13A shows a configuration for executing the trace-back in which the path selection signals for n states (for example, 16 states = 1 word) are stored in one address of a memory (a path selection signal memory) storing the path selection signals.
In this configuration, it is important that an address of the path selection signal memory is determined from a Viterbi state and, furthermore, that the next Viterbi state can be determined from a path selection signal read out from the memory. In other words, the address determination and the memory access must be executed within one processing unit (one cycle). Generally, the operation delays in the memory are as indicated in FIG. 13B: first, an address delay caused by wiring, indicated as a delay A, and next, an output delay in the memory and a data delay caused by wiring, which together are indicated as a delay C. These operation delays are rather long in general.
Therefore, the upper limit of the speed of the trace-back operation is determined as shown in FIG. 13C. In addition to the delays A and C described above, delays B and D also are included, which relate to a selector for extracting a path selection signal for one state from one word read out from the path selection signal memory. With consideration to the setup time of a shift register for storing Viterbi states, the upper limit of the operation speed is determined depending on a longer delay out of the following two delays.
(1) Delay around the path selection signal memory: Delay A + Delay C + Delay D
(2) Delay around the selector: Delay B + Delay D
As can be understood from the above, the delay around the path selection signal memory is longer than that around the selector, which limits the operation speed and thus makes high speed trace-back difficult.
In order to solve this problem, there is a method in which a high speed operation is achieved by using, for example, a register file as the path selection signal memory. However, such a method causes an increase in circuit size and area and is disadvantageous in attempting to reduce power consumption.