1. Field of the Invention
The present invention relates to a decoding apparatus and a decoding method. More particularly, the present invention relates to a decoding apparatus capable of implementing a high decoding speed in a process to decode channel input bits from a partial-response channel output in accordance with a trellis obtained by combining a coding constraint and state transitions of a partial response for a case in which the length of a memory required for describing the coding constraint is greater than the length of a channel memory of the partial response, and relates to a decoding method adopted by the decoding apparatus.
2. Description of the Related Art
In a recording/reproduction apparatus for recording data into a recording medium such as a magnetic disk or an optical disk and reproducing data from the recording medium, a detection method equalized to a partial response is widely used. If a partial response matching the transfer characteristic of the recording/reproduction apparatus is used, the recording/reproduction apparatus is capable of suppressing noise emphases. In addition, by adoption of typically a Viterbi algorithm making use of a trellis representing state transitions of a partial response, the recording/reproduction apparatus is capable of easily carrying out a maximum likelihood decoding process on a string of channel input bits.
For example, a partial response represented by a transfer characteristic of H(D)=1−D^2 is referred to as a PR4 (Partial Response Class 4), which is often utilized in a magnetic recording/reproduction apparatus. In this case, notation D denotes an operator representing a delay with a time length T corresponding to 1 channel bit. Since the channel memory length is 2, the PR4 can be described in terms of state transitions having a state count of 4 (=2^2).
FIG. 1 is a diagram showing the state transitions of the PR4.
Each circle shown in FIG. 1 denotes a state. Notation Sij inside every circle is the name of the state represented by the circle. In the following description, the phrase ‘state Sij’ is a technical term used for describing a state the name of which is Sij. The suffixes i and j of state Sij represent values stored in a memory as the values of channel bits. That is to say, state Sij is a state in which the channel bit immediately preceding (and causing) a transition to state Sij is j and the channel bit immediately preceding the channel bit (j) immediately preceding (and causing) the transition to state Sij is i. In other words, Sij used as the state name of state Sij has suffixes (that is, i and j) appended to the character S in an order starting with a value (that is, i) stored least recently in the memory as the value of a channel bit and ending with a value (that is, j) stored just before the transition to state Sij in the memory as the value of a channel bit.
Each arrow shown in FIG. 1 originates from one state Sij and points to another state (or back to the same state) and has a label x/y. The arrow represents a transition made from the originating state to the pointed-to state in accordance with the channel-bit input x of the label x/y. It is to be noted that notation y in the label x/y denotes an expected output value of the PR4. In the following description, an expected output value of the PR4 is referred to simply as an output expected value.
As shown in FIG. 1, if the channel-bit input x of state S00 in the PR4 is 0, a transition from state S00 to state S00 itself is made. For this transition, the output expected value is 0. If the channel-bit input x of state S00 is 1, on the other hand, a transition from state S00 to state S01 is made. For this transition, the output expected value is 1.
If the channel-bit input x of state S01 is 0, a transition from state S01 to state S10 is made. For this transition, the output expected value is 0. If the channel-bit input x of state S01 is 1, on the other hand, a transition from state S01 to state S11 is made. For this transition, the output expected value is 1.
If the channel-bit input x of state S10 is 1, a transition from state S10 to state S01 is made. For this transition, the output expected value is 0. If the channel-bit input x of state S10 is 0, on the other hand, a transition from state S10 to state S00 is made. For this transition, the output expected value is −1.
If the channel-bit input x of state S11 is 0, a transition from state S11 to state S10 is made. For this transition, the output expected value is −1. If the channel-bit input x of state S11 is 1, on the other hand, a transition from state S11 to state S11 itself is made. For this transition, the output expected value is 0.
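The four states and their transitions can be captured by a short function. The following Python sketch (not part of the original disclosure; the function name is illustrative) encodes the rule that state Sij stores the two most recent channel bits and that the PR4 output expected value is x − i:

```python
# PR4 (H(D) = 1 - D^2) state transitions of FIG. 1.
# A state "Sij" stores the two most recent channel bits (i, j);
# on input x the next state is "Sjx" and the expected output is x - i.

def pr4_step(state, x):
    """Return (next_state, expected_output) for channel-bit input x."""
    i, j = state            # the two bits stored in the channel memory
    return (j, x), x - i    # new memory contents and output expected value

# Walk the transitions described for states S10 and S11:
assert pr4_step((1, 0), 1) == ((0, 1), 0)   # S10 --1/0--> S01
assert pr4_step((1, 0), 0) == ((0, 0), -1)  # S10 --0/-1--> S00
assert pr4_step((1, 1), 0) == ((1, 0), -1)  # S11 --0/-1--> S10
assert pr4_step((1, 1), 1) == ((1, 1), 0)   # S11 --1/0--> S11
```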
FIG. 2 is a trellis diagram showing a trellis for a channel memory length of 2.
It is to be noted that, in the trellis diagram of FIG. 2, the state name Sij is not enclosed in a circle representing state Sij. Instead, at the left end of each row of the trellis, a state name Sij is described as the name shared by the states each represented by a blank circle on the same row. That is to say, on every row of the trellis, all circles laid out in the horizontal direction have the state name Sij shown at the left end of the row. In FIG. 2, a label is omitted from each arrow representing a branch or a state transition. FIGS. 5, 6, 12, 15, 20 and 21 to be described later are shown in the same way as FIG. 2.
In the state transition shown in FIG. 2, state S00 transits to state S00 itself or state S01 in accordance with the channel bit input. By the same token, state S10 transits to state S00 or state S01 in accordance with the channel bit input. In the same way, state S01 transits to state S10 or state S11 in accordance with the channel bit input. Likewise, state S11 transits to state S11 itself or state S10 in accordance with the channel bit input.
Next, a reproduced-signal decoding method equalized to a partial response channel is explained. In accordance with a trellis, it is possible to carry out a maximum likelihood decoding process or a MAP (Maximum A Posteriori Probability) decoding process of a string of channel input bits for each bit on a reproduced signal. As a decoder for carrying out a maximum likelihood decoding process of a string of channel input bits, a Viterbi decoder is often used.
FIG. 3 is a block diagram showing the configuration of the Viterbi decoder 10.
As shown in FIG. 3, the Viterbi decoder 10 includes a BMU (Branch Metric Unit) 11, an ACSU (Add-Compare-Select Unit) 12 and an SMU (Survivor Memory Unit) 13.
The BMU 11 receives a reproduced signal of a certain time instant. On the basis of the reproduced signal, the BMU 11 finds the probability that a state transition occurs in a trellis. This probability is referred to as a branch metric or branch information (which is information on a branch). The BMU 11 supplies the branch metrics to the ACSU 12.
The ACSU 12 is a unit for selecting a most probable path leading to a state of the next time among paths leading to states of the next time and updating a path metric or path information (which is information on paths). The ACSU 12 selects the most probable path on the basis of the path metric updated in the immediately preceding path selection processing as indicated by a feedback arrow on the ACSU 12 and a branch metric received from the BMU 11. The path metric is information on the probability of each of the paths leading to states of the next time. The ACSU 12 supplies information identifying the selected most probable path to the SMU 13.
On the basis of information received from the ACSU 12 as the information identifying the most probable path, the SMU 13 accumulates inputs to a trellis corresponding to a most probable path in each state and outputs an input to a past trellis existing on a path surviving at the present time as a decoding result.
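To illustrate how the BMU, ACSU and SMU cooperate, the following is a minimal Python sketch (purely illustrative; the function name and the metric choice are assumptions, not part of the original) of a Viterbi decoder over the 4-state PR4 trellis, using the negative squared error as a branch metric so that the most probable path has the largest path metric:

```python
# A compact Viterbi decoder over the 4-state PR4 trellis of FIG. 2,
# combining the roles of the BMU (branch metrics), ACSU (add-compare-select)
# and SMU (survivor paths).  Metrics are negative squared errors, so the
# most probable path has the LARGEST path metric, as in the text.

def viterbi_pr4(samples):
    # state = (i, j): the two most recent channel bits; decoding starts in S00
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    pm = {s: (0.0 if s == (0, 0) else float('-inf')) for s in states}
    paths = {s: [] for s in states}
    for r in samples:
        new_pm, new_paths = {}, {}
        for (i, j) in states:
            for x in (0, 1):
                nxt, y = (j, x), x - i             # next state, expected output
                bm = -(r - y) ** 2                 # BMU: branch metric
                cand = pm[(i, j)] + bm             # ACSU: add
                if nxt not in new_pm or cand > new_pm[nxt]:   # compare-select
                    new_pm[nxt] = cand
                    new_paths[nxt] = paths[(i, j)] + [x]      # SMU: survivor
        pm, paths = new_pm, new_paths
    best = max(pm, key=pm.get)
    return paths[best]

# Noise-free PR4 outputs for input bits 1,1,0,0 starting from S00 are 1,1,-1,-1:
assert viterbi_pr4([1, 1, -1, -1]) == [1, 1, 0, 0]
```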
As described above, in the Viterbi decoder 10, the ACSU 12 feeds back a path metric updated in the present path selection processing to be used in the next path selection processing. Thus, the effort to increase the speed of the decoding process is limited.
FIG. 4 is a block diagram showing the configuration of a circuit 30 employed in the ACSU 12 as a circuit for updating the path metric of state s1.
It is to be noted that, in the circuit 30 shown in FIG. 4, branches merging in state s1 are branches from four states, i.e. states s1 to s4.
The circuit 30 shown in FIG. 4 includes four memories 31-1 to 31-4, four adders 32-1 to 32-4, a comparator 33 and a selector 34. It is to be noted that FIG. 4 does not show information to be supplied to the SMU 13 as the information identifying the most probable path. The information identifying the most probable path is information used for selecting one of the branches. The omission of the information identifying the most probable path applies to FIGS. 7 to 11, 13, 16, 18, 19 and 22 to 25, which will be explained later.
The memories 31-1 to 31-4 are each typically a flip-flop circuit. The memories 31-1 to 31-4 are used for storing respectively path metrics PMs1,k−1, PMs2,k−1, PMs3,k−1, and PMs4,k−1 of states s1, s2, s3 and s4 at a time k−1. The memories 31-1 to 31-4 supply the path metrics PMs1,k−1, PMs2,k−1, PMs3,k−1, and PMs4,k−1 stored therein to the adders 32-1 to 32-4 respectively.
The adders 32-1 to 32-4 add the path metrics PMs1,k−1, PMs2,k−1, PMs3,k−1 and PMs4,k−1 received from the memories 31-1 to 31-4 respectively to branch metrics BM1,k, BM2,k, BM3,k and BM4,k of branches merging in state s1 received from the BMU 11 to result in sums. The adders 32-1 to 32-4 supply the sums to the comparator 33 and the selector 34 as path metrics.
The comparator 33 compares the path metrics received from the adders 32-1 to 32-4 with each other and supplies information identifying the path metric of a most probable path to the selector 34. For example, the comparator 33 takes the largest one among the path metrics as the path metric of a most probable path and supplies the information identifying the path metric of the most probable path to the selector 34.
On the basis of information received from the comparator 33 as the information identifying the path metric of a most probable path, the selector 34 selects the path metric of a most probable path as the path metric PMs1,k of state s1 at the time k and outputs the path metric PMs1,k to the SMU 13. The selector 34 also feeds back the path metric PMs1,k to be used in the next processing as the path metric PMs1,k−1 to the memory 31-1 to be stored therein. That is to say, the path metric PMs1,k−1 stored in the memory 31-1 as a path metric for the time k−1 is updated by being replaced with the path metric PMs1,k output by the selector 34 as a path metric for the time k.
Other circuits for states s2 to s4 can be designed into the same configuration as the circuit 30 for state s1. It is to be noted that the memories 31-1 to 31-4 can be shared by the other circuits. That is to say, the ACSU 12 has circuits designed for states s1 to s4 as circuits each having the same configuration as the circuit 30, but the ACSU 12 includes only one set of memories 31-1 to 31-4. In other words, sets of memories 31-1 to 31-4 for states s2 to s4 can be eliminated. The ACSU 12 carries out the addition, comparison and selection processes at every time instant, updating the path metrics.
As described above, the path metric of the time k is used for updating the path metric of the time k+1. Thus, the processing of the ACSU 12 needs to be completed within one time unit.
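The add-compare-select step of the circuit 30 amounts to taking the maximum of four sums. A minimal Python sketch (illustrative names, not from the original) is:

```python
# One path-metric update of circuit 30 in FIG. 4: four path metrics of the
# time k-1 are each added to the branch metric of the branch merging in
# state s1, the sums are compared, and the largest survives as PM_{s1,k}.

def acs_update(prev_pms, branch_metrics):
    """prev_pms, branch_metrics: 4-element sequences for states s1..s4."""
    sums = [pm + bm for pm, bm in zip(prev_pms, branch_metrics)]  # adders 32-1..32-4
    best = max(range(4), key=lambda n: sums[n])                   # comparator 33
    return sums[best], best                                       # selector 34

pm_k, winner = acs_update([3.0, 5.0, 1.0, 4.0], [2.0, -1.0, 6.0, 0.5])
assert (pm_k, winner) == (7.0, 2)   # the branch from state s3 survives
```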
As a technique to make the processing of the ACSU 12 simple, there has been proposed a technique to change the order to add a branch metric as described in documents such as Japanese Patent Laid-open No. Hei 7-183819.
Next, a process to code channel bits is explained as follows.
In many cases, a string of channel bits to be recorded is subjected to a coding process suitable for the recording/reproduction system and, thus, a recorded string of channel bits is a string that has been subjected to the coding process prior to the recording process. In the case of an RLL (Run-Length Limited) coding process, for example, the number of consecutive bits having the same value is limited by a minimum number, a maximum number or both. A (d, k)-RLL code is a code including a string of 0 code bits between two 1 code bits. The length of the string is limited by a maximum k and a minimum d. If a (d, k)-RLL code is recorded by adoption of an NRZI (Non Return to Zero Invert) method into a recording medium, the number of consecutive channel bits having the same value has a minimum of (d+1) and a maximum of (k+1).
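A small Python sketch (illustrative, with assumed helper names) shows the relation between the (d, k) limits on the code bits and the run lengths of the NRZI channel bits:

```python
# A (d, k)-RLL code places between d and k zeros between consecutive ones
# in the code-bit string.  Recorded with NRZI, runs of identical channel
# bits then have lengths between d+1 and k+1.

def nrzi_encode(code_bits, start=0):
    """NRZI: a 1 code bit inverts the channel polarity, a 0 keeps it."""
    out, level = [], start
    for b in code_bits:
        level ^= b
        out.append(level)
    return out

def run_lengths(bits):
    runs, n = [], 1
    for a, b in zip(bits, bits[1:]):
        if a == b:
            n += 1
        else:
            runs.append(n)
            n = 1
    runs.append(n)
    return runs

# d=1, k=7 (as in the 17PP example): zero runs of 1..7 between ones,
# hence channel-bit runs of 2..8.  Boundary runs may be truncated.
code = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1]
channel = nrzi_encode(code)
assert run_lengths(channel) == [2, 3, 8, 1]
```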
In the case of the 17PP (Parity Preserve) code used in a Blu-ray disc, for example, the minimum bit count d is limited to 1 whereas the maximum bit count k is limited to 7. With the minimum bit count d limited to 1, the interval of the channel-bit polarity inversion is at least equal to a period of 2 T. In addition, in the case of the 17PP code, the number of consecutive minimum-interval polarity inversions is limited to a maximum value of 6. In the following description, the number of consecutive minimum-interval polarity inversions is referred to as a consecutive minimum-interval polarity inversion count r. In addition, in the case of an MTR (Maximum Transition Run) coding process, the number of consecutive polarity inversions is limited.
When combined with state transitions of a partial response, channel-bit coding restrictions like the ones described above are capable of preventing a pattern prohibited by the coding restrictions from being obtained as a result of decoding. In addition, in some cases, the Euclidean distance in the trellis is increased and the error-rate characteristic is improved.
FIG. 5 is a trellis diagram for coding restrictions combined with state transitions of a partial response for a channel memory length of 2 as coding restrictions for a process to record an RLL code with the minimum bit count d limited to 1 by adoption of the NRZI method.
The trellis diagram of FIG. 5 is a trellis diagram for the minimum bit count d limited to 1. Thus, the trellis shown in FIG. 5 is obtained by eliminating a branch representing a transition from state S10 to S01 and a branch representing a transition from state S01 to S10 from the trellis shown in FIG. 2.
FIG. 6 is a trellis diagram for coding restrictions combined with state transitions of a partial response for a channel memory length of 2 as coding restrictions for a process to record an RLL code with the maximum bit count k limited to 2 by adoption of the NRZI method.
In the trellis shown in FIG. 6, in order to incorporate the coding restriction limiting the maximum bit count k to 2 into the state transitions, state S00 in the trellis shown in FIG. 2 as a trellis for a channel memory length of 2 is split into states S000 and S100 whereas state S11 in the same trellis is split into states S011 and S111. That is to say, in the trellis shown in FIG. 6, the number of states is increased to 6.
In addition, in the trellis shown in FIG. 6, the maximum bit count k is limited to 2. Thus, in the trellis shown in FIG. 6, there is no branch representing a transition from state S000, which is reached after the 0 channel bit has entered three consecutive times, back to the same state S000 due to the 0 channel bit entering again. By the same token, in the trellis shown in FIG. 6, there is no branch representing a transition from state S111, which is reached after the 1 channel bit has entered three consecutive times, back to the same state S111 due to the 1 channel bit entering again.
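The six states of FIG. 6 and their branches can be written down as a small table; the following Python dictionary (an illustrative reconstruction from the description, not part of the original) makes the missing self-loops of states S000 and S111 explicit:

```python
# The 6-state trellis of FIG. 6 (PR4 combined with the k=2 constraint)
# as a branch table: state -> {channel-bit input: next state}.
# States S000 and S111 have no self-loop, which forbids a fourth
# consecutive identical channel bit.

TRELLIS_K2 = {
    'S000': {1: 'S01'},                 # a fourth consecutive 0 is forbidden
    'S100': {0: 'S000', 1: 'S01'},
    'S10':  {0: 'S100', 1: 'S01'},
    'S01':  {0: 'S10',  1: 'S011'},
    'S011': {0: 'S10',  1: 'S111'},
    'S111': {0: 'S10'},                 # a fourth consecutive 1 is forbidden
}

# 10 branches in total: 1+2+2+2+2+1
assert sum(len(v) for v in TRELLIS_K2.values()) == 10
```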
FIG. 7 is a block diagram showing a typical configuration of an ACSU 50 employed in the conventional Viterbi decoder for carrying out a decoding process for each time instant in accordance with the trellis shown in FIG. 6.
To begin with, the following description explains a technique adopted by the ACSU 50 with a configuration shown in FIG. 7 to update the path metric of each state S.
In a partial response with a channel memory length of 2, there are a maximum of eight output expected values for channel-bit strings of 000, 001, 010, 011, 100, 101, 110 and 111. The BMU finds branch metrics BM0,k to BM7,k from the output expected values and a signal reproduced at a transition from the time (k−1) to the time k.
The path metric of every state S is updated by making use of the branch metrics BM0,k to BM7,k in accordance with path-metric updating equations expressed by Eqs. (1) as follows:

PMS000,k = PMS100,k−1 + BM0,k
PMS100,k = PMS10,k−1 + BM4,k
PMS10,k = max(PMS01,k−1 + BM2,k, PMS011,k−1 + BM6,k, PMS111,k−1 + BM6,k)
PMS01,k = max(PMS10,k−1 + BM5,k, PMS100,k−1 + BM1,k, PMS000,k−1 + BM1,k)
PMS011,k = PMS01,k−1 + BM3,k
PMS111,k = PMS011,k−1 + BM7,k  (1)
It is to be noted that, in the path-metric updating equations expressed by Eqs. (1), notations PMS000,k, PMS100,k, PMS10,k, PMS01,k, PMS011,k and PMS111,k denote respectively the path metrics of states S000, S100, S10, S01, S011, and S111 of the time k whereas notations PMS000,k−1, PMS100,k−1, PMS10,k−1, PMS01,k−1, PMS011,k−1 and PMS111,k−1 denote respectively the path metrics of states S000, S100, S10, S01, S011 and S111 of the time k−1.
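One update step of Eqs. (1) can be sketched directly in Python (illustrative names; the numeric values are arbitrary test data, not from the original):

```python
# One update of all six path metrics according to Eqs. (1).
# `pm` maps state names to path metrics of the time k-1 and
# `bm` is the list of branch metrics BM_{0,k} .. BM_{7,k}.

def update_eqs1(pm, bm):
    return {
        'S000': pm['S100'] + bm[0],
        'S100': pm['S10']  + bm[4],
        'S10':  max(pm['S01'] + bm[2], pm['S011'] + bm[6], pm['S111'] + bm[6]),
        'S01':  max(pm['S10'] + bm[5], pm['S100'] + bm[1], pm['S000'] + bm[1]),
        'S011': pm['S01']  + bm[3],
        'S111': pm['S011'] + bm[7],
    }

pm0 = {'S000': 0.0, 'S100': 1.0, 'S10': 2.0, 'S01': 3.0,
       'S011': 4.0, 'S111': 5.0}
bm  = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
pm1 = update_eqs1(pm0, bm)
# The largest of the three sums merging in S10 survives:
assert pm1['S10'] == max(3.0 + 0.3, 4.0 + 0.7, 5.0 + 0.7)
```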
The following description explains the configuration of the ACSU 50 for updating the path metrics in accordance with the path-metric updating equations expressed by Eqs. (1).
The ACSU 50 with a configuration shown in FIG. 7 employs memories 51-1 to 51-6, adders 52-1 to 52-10 as well as max circuits 53-1 and 53-2.
The memories 51-1 to 51-6 are memories used for storing respectively the path metrics PMS000,k−1, PMS100,k−1, PMS10,k−1, PMS01,k−1, PMS011,k−1 and PMS111,k−1 of states S000, S100, S10, S01, S011 and S111 at the time k−1. The memory 51-1 supplies the path metric PMS000,k−1 stored therein to the adder 52-1 whereas the memory 51-2 supplies the path metric PMS100,k−1 stored therein to the adders 52-2 and 52-3.
In addition, the memory 51-3 supplies the path metric PMS10,k−1 stored therein to the adders 52-4 and 52-5 whereas the memory 51-4 supplies the path metric PMS01,k−1 stored therein to the adders 52-6 and 52-7. In addition, the memory 51-5 supplies the path metric PMS011,k−1 stored therein to the adders 52-8 and 52-9 whereas the memory 51-6 supplies the path metric PMS111,k−1 stored therein to the adder 52-10.
The adder 52-1 adds the path metric PMS000,k−1 received from the memory 51-1 to the branch metric BM1,k received from the BMU and supplies the result of the addition to the max circuit 53-1.
The adder 52-2 adds the path metric PMS100,k−1 received from the memory 51-2 to the branch metric BM0,k received from the BMU to produce a path metric PMS000,k as a result of the addition. That is to say, the adder 52-2 computes the path metric PMS000,k in accordance with the first equation from the top of Eqs. (1). The path metric PMS000,k is fed back to the memory 51-1 and stored therein as the path metric PMS000,k−1.
The adder 52-3 adds the path metric PMS100,k−1 received from the memory 51-2 to the branch metric BM1,k received from the BMU and supplies the result of the addition to the max circuit 53-1.
The adder 52-4 adds the path metric PMS10,k−1 received from the memory 51-3 to the branch metric BM4,k received from the BMU to produce a path metric PMS100,k as a result of the addition. That is to say, the adder 52-4 computes the path metric PMS100,k in accordance with the second equation from the top of Eqs. (1). The path metric PMS100,k is fed back to the memory 51-2 to be stored therein as the path metric PMS100,k−1.
The adder 52-5 adds the path metric PMS10,k−1 received from the memory 51-3 to the branch metric BM5,k received from the BMU and supplies the result of the addition to the max circuit 53-1.
The adder 52-6 adds the path metric PMS01,k−1 received from the memory 51-4 to the branch metric BM2,k received from the BMU and supplies the result of the addition to the max circuit 53-2.
The adder 52-7 adds the path metric PMS01,k−1 received from the memory 51-4 to the branch metric BM3,k received from the BMU to produce a path metric PMS011,k as a result of the addition. That is to say, the adder 52-7 computes the path metric PMS011,k in accordance with the fifth equation from the top of Eqs. (1). The path metric PMS011,k is fed back to the memory 51-5 to be stored therein as the path metric PMS011,k−1. 
The adder 52-8 adds the path metric PMS011,k−1 received from the memory 51-5 to the branch metric BM6,k received from the BMU and supplies the result of the addition to the max circuit 53-2.
The adder 52-9 adds the path metric PMS011,k−1 received from the memory 51-5 to the branch metric BM7,k received from the BMU to produce a path metric PMS111,k as a result of the addition. That is to say, the adder 52-9 computes the path metric PMS111,k in accordance with the sixth equation from the top of Eqs. (1). The path metric PMS111,k is fed back to the memory 51-6 and stored therein as the path metric PMS111,k−1.
The adder 52-10 adds the path metric PMS111,k−1 received from the memory 51-6 to the branch metric BM6,k received from the BMU and supplies the result of the addition to the max circuit 53-2.
The max circuit 53-1 selects the largest value among the addition results received from the adders 52-1, 52-3 and 52-5 and takes the selected value as the path metric PMS01,k. That is to say, the max circuit 53-1 computes the path metric PMS01,k in accordance with the fourth equation from the top of Eqs. (1) and feeds back the path metric PMS01,k to the memory 51-4 as the path metric PMS01,k−1.
By the same token, the max circuit 53-2 selects the largest value among the addition results received from the adders 52-6, 52-8 and 52-10 and takes the selected value as the path metric PMS10,k. That is to say, the max circuit 53-2 computes the path metric PMS10,k in accordance with the third equation from the top of Eqs. (1) and feeds back the path metric PMS10,k to the memory 51-3 as the path metric PMS10,k−1.
As described above, in the ACSU 50, the max circuits 53-1 and 53-2 each need to select the largest value among three addition results received from adders. This is because the length of the memory required for describing the coding restrictions is greater than the channel memory length of the partial response and, thus, the number of branches merging in the same state S is three or greater.
As described above, if the number of branches merging in a state S increases, the number of addition results each serving as a candidate to be selected as a maximum value increases as well. Thus, it takes a long time to select one of such candidates. As a result, the processing speed of the ACSU undesirably decreases.
FIG. 8 is a block diagram showing a typical configuration of an ACSU 60 employed in the conventional Viterbi decoder for carrying out a decoding process for every two time instants in accordance with the trellis shown in FIG. 6.
To begin with, the following description explains a technique adopted by the ACSU 60 with a configuration shown in FIG. 8 to update the path metric of each state S.
In a partial response with a channel memory length of 2, there are a maximum of eight output expected values for channel-bit strings of 000, 001, 010, 011, 100, 101, 110 and 111. In the case of the configuration shown in FIG. 8, in addition to the branch metrics BM0,k to BM7,k, the BMU finds branch metrics BM0,k−1 to BM7,k−1 from the output expected values as well as a signal reproduced at a transition from the time (k−2) to the time (k−1).
The path metric of every state S is updated for every two time instants by making use of the branch metrics BM0,k−1 to BM7,k−1 as well as the branch metrics BM0,k to BM7,k in accordance with the path-metric updating equations expressed by Eqs. (2) as follows:

PMS000,k = PMS10,k−2 + BM4,k−1 + BM0,k
PMS100,k = max(PMS01,k−2 + BM2,k−1 + BM4,k, PMS011,k−2 + BM6,k−1 + BM4,k, PMS111,k−2 + BM6,k−1 + BM4,k)
PMS10,k = max(PMS10,k−2 + BM5,k−1 + BM2,k, PMS100,k−2 + BM1,k−1 + BM2,k, PMS000,k−2 + BM1,k−1 + BM2,k, PMS01,k−2 + BM3,k−1 + BM6,k, PMS011,k−2 + BM7,k−1 + BM6,k)
PMS01,k = max(PMS01,k−2 + BM2,k−1 + BM5,k, PMS011,k−2 + BM6,k−1 + BM5,k, PMS111,k−2 + BM6,k−1 + BM5,k, PMS10,k−2 + BM4,k−1 + BM1,k, PMS100,k−2 + BM0,k−1 + BM1,k)
PMS011,k = max(PMS10,k−2 + BM5,k−1 + BM3,k, PMS100,k−2 + BM1,k−1 + BM3,k, PMS000,k−2 + BM1,k−1 + BM3,k)
PMS111,k = PMS01,k−2 + BM3,k−1 + BM7,k  (2)
The following description explains the configuration of the ACSU 60 for updating the path metrics in accordance with the path-metric updating equations expressed by Eqs. (2).
The ACSU 60 with a configuration shown in FIG. 8 employs memories 61-1 to 61-6, adders 62-1 to 62-18 and max circuits 63-1 to 63-4.
In the same way as the ACSU 50 shown in FIG. 7, in the ACSU 60 shown in FIG. 8, the adders 62-1 to 62-18 add path metrics to branch metrics BM whereas the max circuits 63-1 to 63-4 select maximum values and feed back the selected maximum values to the memories 61-1 to 61-6.
To put it concretely, the adder 62-6 carries out a path-metric updating process based on the first equation from the top of Eqs. (2). The adders 62-10, 62-14 and 62-17 as well as the max circuit 63-1 carry out path-metric updating processes based on the second equation from the top of Eqs. (2). The adders 62-1, 62-3, 62-7, 62-11 and 62-15 as well as the max circuit 63-2 carry out path-metric updating processes based on the third equation from the top of Eqs. (2).
The adders 62-4, 62-8, 62-12, 62-16 and 62-18 as well as the max circuit 63-3 carry out path-metric updating processes based on the fourth equation from the top of Eqs. (2). The adders 62-2, 62-5 and 62-9 as well as the max circuit 63-4 carry out path-metric updating processes based on the fifth equation from the top of Eqs. (2). The adder 62-13 carries out a path-metric updating process based on the sixth equation from the top of Eqs. (2).
As described above, the max circuits 63-2 and 63-3 employed in the ACSU 60 each need to select the largest value among five addition results received from adders. Thus, also in the case of the ACSU 60, it takes a long time to select the largest value.
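Because Eqs. (2) are obtained by unrolling Eqs. (1) over two time instants, the two-step update must agree with applying the one-step update twice. The following Python sketch (illustrative names and arbitrary test values, not from the original) checks this numerically:

```python
# The radix-4 update of Eqs. (2) must agree with applying the one-step
# update of Eqs. (1) twice, once with the branch metrics of the time k-1
# and once with those of the time k.

def step1(pm, bm):
    # Eqs. (1): one-step path-metric update
    return {
        'S000': pm['S100'] + bm[0],
        'S100': pm['S10']  + bm[4],
        'S10':  max(pm['S01'] + bm[2], pm['S011'] + bm[6], pm['S111'] + bm[6]),
        'S01':  max(pm['S10'] + bm[5], pm['S100'] + bm[1], pm['S000'] + bm[1]),
        'S011': pm['S01']  + bm[3],
        'S111': pm['S011'] + bm[7],
    }

def step2(pm, bm1, bm2):
    # Eqs. (2): two-step path-metric update (bm1 = BM_{.,k-1}, bm2 = BM_{.,k})
    return {
        'S000': pm['S10'] + bm1[4] + bm2[0],
        'S100': max(pm['S01'] + bm1[2] + bm2[4], pm['S011'] + bm1[6] + bm2[4],
                    pm['S111'] + bm1[6] + bm2[4]),
        'S10':  max(pm['S10'] + bm1[5] + bm2[2], pm['S100'] + bm1[1] + bm2[2],
                    pm['S000'] + bm1[1] + bm2[2], pm['S01'] + bm1[3] + bm2[6],
                    pm['S011'] + bm1[7] + bm2[6]),
        'S01':  max(pm['S01'] + bm1[2] + bm2[5], pm['S011'] + bm1[6] + bm2[5],
                    pm['S111'] + bm1[6] + bm2[5], pm['S10'] + bm1[4] + bm2[1],
                    pm['S100'] + bm1[0] + bm2[1]),
        'S011': max(pm['S10'] + bm1[5] + bm2[3], pm['S100'] + bm1[1] + bm2[3],
                    pm['S000'] + bm1[1] + bm2[3]),
        'S111': pm['S01'] + bm1[3] + bm2[7],
    }

pm  = {'S000': 0.0, 'S100': 1.0, 'S10': 2.0, 'S01': 3.0, 'S011': 4.0, 'S111': 5.0}
bm1 = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
bm2 = [0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
assert step2(pm, bm1, bm2) == step1(step1(pm, bm1), bm2)
```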
In addition, besides the Viterbi algorithm, the decoding algorithms according to a trellis include a MAP algorithm, a Log-MAP algorithm and a Max-Log-MAP algorithm. The MAP algorithm is also known as a BCJR (Bahl, Cocke, Jelinek and Raviv) algorithm. The Log-MAP algorithm is an algorithm implementing the MAP algorithm in a logarithmic region. The Max-Log-MAP algorithm is an approximation of the Log-MAP algorithm. These other algorithms also have the problem that, the larger the number of branches merging in a state, the more complicated the processing. For more information on this, the reader is suggested to refer to non-patent reference 1 (L. R. Bahl, J. Cocke, F. Jelinek and J. Raviv, “Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate,” IEEE Transactions on Information Theory, vol. IT-20, pp. 284-287, March 1974) and non-patent reference 2 (P. Robertson, E. Villebrun and P. Hoeher, “A Comparison of Optimal and Sub-Optimal MAP Decoding Algorithms Operating in the Log Domain,” in Proc. ICC '95, pp. 1009-1013, June 1995).
FIG. 9 is a diagram showing a typical configuration of a circuit 70 employed in the conventional decoder, which is used for carrying out a decoding process for every time instant by adoption of the BCJR algorithm, as a circuit for updating the probability of each state S in accordance with the trellis shown in FIG. 6.
First of all, the following description explains equations for updating the probability of each state S by adoption of the BCJR algorithm.
In accordance with the BCJR algorithm, the probability αS,k of state S after a transition at a time k is updated in accordance with probability updating equations expressed by Eqs. (3) on the basis of the probability αS′,k−1 of state S′ preceding the transition at a time (k−1) and a transition probability γ of the branch. In the probability updating equations expressed by Eqs. (3), the probability αS,k of state S after a transition at a time k is the path information cited earlier. The transition is a transition tracing the trellis in the positive direction of the trellis. On the other hand, the transition probability γ of the branch is the branch information mentioned before:

αS000,k = αS100,k−1 × γ0,k
αS100,k = αS10,k−1 × γ4,k
αS10,k = αS01,k−1 × γ2,k + αS011,k−1 × γ6,k + αS111,k−1 × γ6,k
αS01,k = αS10,k−1 × γ5,k + αS100,k−1 × γ1,k + αS000,k−1 × γ1,k
αS011,k = αS01,k−1 × γ3,k
αS111,k = αS011,k−1 × γ7,k  (3)
It is to be noted that the transition probabilities γ0,k to γ7,k are expressed by equations represented by Eqs. (4) as follows:

γ0,k = γk(S100, S000)
γ1,k = γk(S100, S01) = γk(S000, S01)
γ2,k = γk(S01, S10)
γ3,k = γk(S01, S011)
γ4,k = γk(S10, S100)
γ5,k = γk(S10, S01)
γ6,k = γk(S011, S10) = γk(S111, S10)
γ7,k = γk(S011, S111)  (4)
It is to be noted that notation γk (S′, S) used in the probability updating equations represented by Eqs. (4) denotes the transition probability of a branch resulting in a transition from state S′ to state S at a time k.
Eqs. (3) used as the basis to carry out the process to update the probability αS,k by adoption of the BCJR algorithm are obtained from Eqs. (1) used as the basis to carry out the process to update the path metric PMS,k by adoption of the Viterbi algorithm by:
replacing the path metric PMS,k with the probability αS,k;
replacing the branch metric BM with the transition probability γ;
replacing selection of a maximum value with addition; and
replacing addition of the path metric PMS,k to the branch metric BM with multiplication of the probability αS,k by the transition probability γ.
Thus, the circuit 70 shown in FIG. 9 has a configuration obtained from the configuration of the ACSU 50 shown in FIG. 7 by:
replacing the memories 51-1 to 51-6 of the configuration of the ACSU 50 with memories 71-1 to 71-6;
replacing the adders 52-1 to 52-10 of the configuration of the ACSU 50 with multipliers 72-1 to 72-10; and
replacing the max circuits 53-1 and 53-2 of the configuration of the ACSU 50 with sum circuits 73-1 and 73-2.
That is to say, the circuit 70 shown in FIG. 9 includes the memories 71-1 to 71-6, the multipliers 72-1 to 72-10 as well as the sum circuits 73-1 and 73-2.
The multipliers 72-1 to 72-10 receive probabilities αS,k−1 from the memories 71-1 to 71-6 and the transition probabilities γ from an external source. The multipliers 72-1 to 72-10 each multiply the probability αS,k−1 by the transition probability γ and output a product obtained as a result of the multiplication. Some of the products are fed back to the memories 71-1, 71-2, 71-5 and 71-6 to be stored therein as the probability αS,k−1. Instead of being fed back to the memories 71-1, 71-2, 71-5 and 71-6, the remaining products are supplied to the sum circuits 73-1 and 73-2. The sum circuits 73-1 and 73-2 each compute a sum of all input products to produce the probability αS,k. The probability αS,k is fed back to the memory 71-3 or 71-4 to be stored therein as the probability αS,k−1.
To put it concretely, the multiplier 72-2 carries out a process according to the first equation from the top of Eqs. (3). The multiplier 72-4 carries out a process according to the second equation from the top of Eqs. (3). The multipliers 72-6, 72-8 and 72-10 each carry out a process according to the third equation from the top of Eqs. (3) in order to produce products, which are then supplied to the sum circuit 73-2.
The multipliers 72-1, 72-3 and 72-5 each carry out a process according to the fourth equation from the top of Eqs. (3) in order to produce products, which are then supplied to the sum circuit 73-1. The multiplier 72-7 carries out a process according to the fifth equation from the top of Eqs. (3). The multiplier 72-9 carries out a process according to the sixth equation from the top of Eqs. (3).
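The forward recursion of Eqs. (3) can be sketched in Python (illustrative names; the probabilities and transition probabilities are arbitrary test values, not normalized data from the original):

```python
# The forward probabilities of Eqs. (3): compared with Eqs. (1), the
# max-of-sums becomes a sum of products, and adding a branch metric
# becomes multiplying by a transition probability gamma.

def forward_eqs3(a, g):
    """a: alpha_{S,k-1} by state name, g: gamma_{0,k} .. gamma_{7,k}."""
    return {
        'S000': a['S100'] * g[0],
        'S100': a['S10']  * g[4],
        'S10':  a['S01'] * g[2] + a['S011'] * g[6] + a['S111'] * g[6],
        'S01':  a['S10'] * g[5] + a['S100'] * g[1] + a['S000'] * g[1],
        'S011': a['S01'] * g[3],
        'S111': a['S011'] * g[7],
    }

a0 = {'S000': 0.1, 'S100': 0.1, 'S10': 0.2, 'S01': 0.3, 'S011': 0.2, 'S111': 0.1}
g  = [0.5] * 8          # a flat, purely illustrative transition probability
a1 = forward_eqs3(a0, g)
# Three products merge in S10, mirroring the three-input sum circuit 73-2:
assert a1['S10'] == a0['S01'] * g[2] + a0['S011'] * g[6] + a0['S111'] * g[6]
```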
As described above, the sum circuits 73-1 and 73-2 of the circuit 70 each need to compute the sum of three products, so it takes a longer time to sum up the products than in a circuit that computes the sum of only two products. As a result, the time it takes to carry out the decoding process itself also increases.
FIG. 10 is a diagram showing a typical configuration of a circuit 80 employed in the conventional decoder, which is used for carrying out a decoding process for every time instant by adoption of the Log-MAP algorithm, as a circuit for updating the degree of likelihood of each state S in accordance with the trellis shown in FIG. 6.
First of all, the following description explains equations for updating the degree of likelihood of each state S by adoption of the Log-MAP algorithm.
In accordance with the Log-MAP algorithm, the likelihood αS,k of state S after a transition at a time k is updated in accordance with likelihood updating equations expressed by Eqs. (5) on the basis of the likelihood αS′,k−1 of state S′ preceding the transition at a time (k−1) and the transition likelihood γ of the branch. In the likelihood updating equations expressed by Eqs. (5), the likelihood αS,k of state S after a transition at a time k is the path information cited earlier. The transition is a transition tracing the trellis in the positive direction of the trellis. On the other hand, the transition likelihood γ of the branch is the branch information mentioned before.
αS000,k = ln(exp(αS100,k−1 + γ0,k))
αS100,k = ln(exp(αS10,k−1 + γ4,k))
αS10,k = ln(exp(αS01,k−1 + γ2,k) + exp(αS011,k−1 + γ6,k) + exp(αS111,k−1 + γ6,k))
αS01,k = ln(exp(αS10,k−1 + γ5,k) + exp(αS100,k−1 + γ1,k) + exp(αS000,k−1 + γ1,k))
αS011,k = ln(exp(αS01,k−1 + γ3,k))
αS111,k = ln(exp(αS011,k−1 + γ7,k))  (5)
It is to be noted that the likelihood values γ0,k to γ7,k used in Eqs. (5) are expressed by Eqs. (4). In this case, however, notation γk (S′, S) used in Eqs. (4) denotes the degree of likelihood of a branch resulting in a transition from state S′ to state S at a time k.
In the expressions on the right side of some equations of Eqs. (5), it is necessary to compute the value of a logarithmic function of a sum of three or more exponential-function values. In this case, the likelihood αS,k is updated by repeatedly carrying out a computation according to a likelihood updating equation expressed by Eq. (6) given below. The computation according to Eq. (6) is referred to as Log-Sum processing:
ln(exp(A) + exp(B)) = max(A, B) + ln(1 + exp(−|B − A|))  (6)
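The Log-Sum processing of Eq. (6), and its repeated application to a sum of three exponential terms, can be sketched as follows. The function names are illustrative; `math.log1p` is used because it evaluates ln(1 + x) accurately for small x.

```python
import math

def log_sum(a, b):
    """Log-Sum processing of Eq. (6): ln(exp(a) + exp(b)) computed
    as max(a, b) + ln(1 + exp(-|b - a|)), which avoids overflow."""
    return max(a, b) + math.log1p(math.exp(-abs(b - a)))

def log_sum3(a, b, c):
    """ln(exp(a) + exp(b) + exp(c)) by repeatedly carrying out the
    pairwise computation of Eq. (6), as described in the text."""
    return log_sum(log_sum(a, b), c)
```

Because Eq. (6) is an exact identity, the repeated pairwise application gives the same result regardless of the order in which the terms are combined.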
Eqs. (5), used as the basis of the process to update the likelihood αS,k by adoption of the Log-MAP algorithm, are obtained from Eqs. (2), used as the basis of the process to update the path metric PMS,k by adoption of the Viterbi algorithm, by:
replacing the path metric PMS,k with the likelihood αS,k;
replacing the branch metric BM with the likelihood γ;
replacing selection of a maximum value with the Log-Sum processing described above; and
replacing addition of the path metric PMS,k to the branch metric BM with addition of the likelihood αS,k to the likelihood γ.
Thus, the circuit 80 shown in FIG. 10 has a configuration obtained from the configuration of the ACSU 50 shown in FIG. 7 by:
replacing the memories 51-1 to 51-6 of the configuration of the ACSU 50 with memories 81-1 to 81-6;
replacing the adders 52-1 to 52-10 of the configuration of the ACSU 50 with adders 82-1 to 82-10; and
replacing the max circuits 53-1 and 53-2 of the configuration of the ACSU 50 with log-sum circuits 83-1 and 83-2.
That is to say, the circuit 80 shown in FIG. 10 includes the memories 81-1 to 81-6, the adders 82-1 to 82-10 as well as the log-sum circuits 83-1 and 83-2.
The adders 82-1 to 82-10 receive the likelihood values αS,k−1 from the memories 81-1 to 81-6 and the likelihood values γ from an external source. The adders 82-1 to 82-10 each add the likelihood αS,k−1 to the likelihood γ and output the resulting sum. Some of the sums are fed back to the memories 81-1, 81-2, 81-5 and 81-6 to be stored therein as the likelihood αS,k−1. The remaining sums, instead of being fed back, are supplied to the log-sum circuits 83-1 and 83-2. The log-sum circuits 83-1 and 83-2 each carry out the Log-Sum processing on all input sums in accordance with Eq. (6) given earlier to produce the likelihood αS,k, which is fed back to the memory 81-3 or 81-4 to be stored therein as the likelihood αS,k−1.
To put it concretely, the adder 82-2 carries out a process according to the first equation from the top of Eqs. (5). The adder 82-4 carries out a process according to the second equation from the top of Eqs. (5). The adders 82-6, 82-8 and 82-10 each carry out a process according to the third equation from the top of Eqs. (5) in order to produce sums, which are then supplied to the log-sum circuit 83-2.
The adders 82-1, 82-3 and 82-5 each carry out a process according to the fourth equation from the top of Eqs. (5) in order to produce sums, which are then supplied to the log-sum circuit 83-1. The adder 82-7 carries out a process according to the fifth equation from the top of Eqs. (5). The adder 82-9 carries out a process according to the sixth equation from the top of Eqs. (5).
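The adder/log-sum-circuit structure just described can be sketched as one time step of Eqs. (5). This is an illustrative software rendering of circuit 80, not the circuit itself; the state names follow the trellis of FIG. 6, and the function names are chosen for this sketch.

```python
import math

def log_sum(a, b):
    # Log-Sum processing of Eq. (6)
    return max(a, b) + math.log1p(math.exp(-abs(b - a)))

def log_map_forward_step(alpha, gamma):
    """One time step of Eqs. (5): 'alpha' holds alpha_{S,k-1},
    'gamma' holds gamma_{0,k}..gamma_{7,k}."""
    return {
        "S000": alpha["S100"] + gamma[0],                  # adder 82-2
        "S100": alpha["S10"]  + gamma[4],                  # adder 82-4
        # log-sum circuit 83-2: three sums from adders 82-6, 82-8, 82-10
        "S10":  log_sum(log_sum(alpha["S01"]  + gamma[2],
                                alpha["S011"] + gamma[6]),
                        alpha["S111"] + gamma[6]),
        # log-sum circuit 83-1: three sums from adders 82-1, 82-3, 82-5
        "S01":  log_sum(log_sum(alpha["S10"]  + gamma[5],
                                alpha["S100"] + gamma[1]),
                        alpha["S000"] + gamma[1]),
        "S011": alpha["S01"]  + gamma[3],                  # adder 82-7
        "S111": alpha["S011"] + gamma[7],                  # adder 82-9
    }
```

The two three-input log-sum evaluations correspond to the slow paths of circuit 80 discussed in the following paragraph.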
As described above, the log-sum circuits 83-1 and 83-2 employed in the circuit 80 each need to carry out Log-Sum processing on three inputs, so it takes a longer time to complete the processing than in a circuit that carries out Log-Sum processing on only two inputs. As a result, the time it takes to carry out the decoding process itself also increases.
FIG. 11 is a diagram showing a typical configuration of a circuit 90 employed in the conventional decoder, which is used for carrying out a decoding process for every time instant by adoption of the Max-Log-MAP algorithm, as a circuit for updating the degree of likelihood of each state S according to the trellis shown in FIG. 6.
First of all, the following description explains equations for updating the degree of likelihood of each state S by adoption of the Max-Log-MAP algorithm.
In accordance with the Max-Log-MAP algorithm, the likelihood αS,k of state S after a transition at a time k is updated in accordance with likelihood updating equations expressed by Eqs. (7) given below on the basis of the likelihood αS′,k−1 of state S′ preceding the transition at a time (k−1) and the transition likelihood γ of the branch. In the likelihood updating equations expressed by Eqs. (7), the likelihood αS,k of state S after a transition at a time k is the path information cited earlier. The transition is a transition tracing the trellis in the positive direction of the trellis. On the other hand, the transition likelihood γ of the branch is the branch information mentioned before. The likelihood updating equations expressed by Eqs. (7) are obtained from Eqs. (5) by replacing the Log-Sum processing of Eqs. (5) with processing to select a maximum value as follows:
αS000,k = ln(exp(αS100,k−1 + γ0,k))
αS100,k = ln(exp(αS10,k−1 + γ4,k))
αS10,k = max(ln(exp(αS01,k−1 + γ2,k)), ln(exp(αS011,k−1 + γ6,k)), ln(exp(αS111,k−1 + γ6,k)))
αS01,k = max(ln(exp(αS10,k−1 + γ5,k)), ln(exp(αS100,k−1 + γ1,k)), ln(exp(αS000,k−1 + γ1,k)))
αS011,k = ln(exp(αS01,k−1 + γ3,k))
αS111,k = ln(exp(αS011,k−1 + γ7,k))  (7)
Eqs. (7), used as the basis of the process to update the likelihood αS,k by adoption of the Max-Log-MAP algorithm, are obtained from Eqs. (2), used as the basis of the process to update the path metric PMS,k by adoption of the Viterbi algorithm, by:
replacing the path metric PMS,k with the likelihood αS,k;
replacing the branch metric BM with the likelihood γ; and
replacing addition of the path metric PMS,k to the branch metric BM with addition of the likelihood αS,k to the likelihood γ.
Thus, the circuit 90 shown in FIG. 11 has a configuration obtained from the configuration of the ACSU 50 shown in FIG. 7 by:
replacing the memories 51-1 to 51-6 of the configuration of the ACSU 50 with memories 91-1 to 91-6;
replacing the adders 52-1 to 52-10 of the configuration of the ACSU 50 with adders 92-1 to 92-10; and
replacing the max circuits 53-1 and 53-2 of the configuration of the ACSU 50 with max circuits 93-1 and 93-2.
That is to say, the circuit 90 shown in FIG. 11 includes the memories 91-1 to 91-6, the adders 92-1 to 92-10 as well as the max circuits 93-1 and 93-2.
The adders 92-1 to 92-10 receive the likelihood values αS,k−1 from the memories 91-1 to 91-6 and the likelihood values γ from an external source. The adders 92-1 to 92-10 each add the likelihood αS,k−1 to the likelihood γ and output the resulting sum. Some of the sums are fed back to the memories 91-1, 91-2, 91-5 and 91-6 to be stored therein as the likelihood αS,k−1. The remaining sums, instead of being fed back, are supplied to the max circuits 93-1 and 93-2. The max circuits 93-1 and 93-2 each select the largest value among all input sums in order to produce the likelihood αS,k, which is fed back to the memory 91-3 or 91-4 to be stored therein as the likelihood αS,k−1.
To put it concretely, the adder 92-2 carries out a process according to the first equation from the top of Eqs. (7). The adder 92-4 carries out a process according to the second equation from the top of Eqs. (7). The adders 92-6, 92-8 and 92-10 each carry out a process according to the third equation from the top of Eqs. (7) in order to produce sums, which are then supplied to the max circuit 93-2.
The adders 92-1, 92-3 and 92-5 each carry out a process according to the fourth equation from the top of Eqs. (7) in order to produce sums, which are then supplied to the max circuit 93-1. The adder 92-7 carries out a process according to the fifth equation from the top of Eqs. (7). The adder 92-9 carries out a process according to the sixth equation from the top of Eqs. (7).
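The adder/max-circuit structure of circuit 90 can likewise be sketched as one time step of Eqs. (7); since ln(exp(x)) = x, each branch term reduces to a single addition. This is an illustrative sketch with illustrative names, not the circuit itself.

```python
def max_log_map_forward_step(alpha, gamma):
    """One time step of Eqs. (7): the Log-Sum processing of the Log-MAP
    algorithm is replaced by selection of the maximum value.
    'alpha' holds alpha_{S,k-1}; 'gamma' holds gamma_{0,k}..gamma_{7,k}."""
    return {
        "S000": alpha["S100"] + gamma[0],          # adder 92-2
        "S100": alpha["S10"]  + gamma[4],          # adder 92-4
        # max circuit 93-2: three sums from adders 92-6, 92-8, 92-10
        "S10":  max(alpha["S01"]  + gamma[2],
                    alpha["S011"] + gamma[6],
                    alpha["S111"] + gamma[6]),
        # max circuit 93-1: three sums from adders 92-1, 92-3, 92-5
        "S01":  max(alpha["S10"]  + gamma[5],
                    alpha["S100"] + gamma[1],
                    alpha["S000"] + gamma[1]),
        "S011": alpha["S01"]  + gamma[3],          # adder 92-7
        "S111": alpha["S011"] + gamma[7],          # adder 92-9
    }
```

The sketch makes the structural point of this passage visible: the two states fed by three branches each require a three-way maximum selection, the slow path discussed in the following paragraph.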
As described above, the max circuits 93-1 and 93-2 of the circuit 90 each need to select the largest value among three input sums, so it takes a longer time to complete the processing than in a circuit that selects the largest value among only two sums. As a result, the time it takes to carry out the decoding process itself also increases.