1. Field of the Invention
The present invention generally relates to channel coding used in communication systems and particularly in wireless communication systems.
2. Description of the Related Art
Channel coding is a well known technique used in communication systems to combat the adverse effects of noise on transmitted signals propagating through the communication channels of those systems. One type of channel coding is known as Forward Error Correction coding, in which information is processed prior to being transmitted over a particular channel so as to better withstand the anomalous effects of the channel. The channel coding adds redundancy to the information to improve the probability that the information is properly decoded once received. The channel coding that is used can be any well known type of information coding, such as block codes or convolutional codes. Convolutional coding is a mapping of the information bits (to be transmitted) to encoder bits. The encoder is a particular processor that operates in accordance with a specific coding scheme usually represented by a state diagram commonly referred to as a trellis. The trellis shows the different states that the encoder has, the possible transitions from one set of states to other sets of states, and thus how the encoder moves from one set of states to another as it processes the information. In sum, at a particular point in time, the coder has a certain number of states, each of which has a particular value, and each such state can transition to one or more other states.
Convolutional codes which are generated in recursive fashion and concatenated are known as concatenated convolutional codes or Turbo codes. The concatenated convolutional coding can be performed either serially (Serial Concatenated Convolutional Coding or SCCC) or in parallel fashion (Parallel Concatenated Convolutional Coding or PCCC). SCCC and PCCC coders and/or decoders are referred to as Turbo coders and decoders. A turbo decoder is a device that is used to decode information that has been encoded by a turbo encoder and possibly has been processed by other coding devices. Referring to FIG. 1, there is shown an example of a turbo encoder 100 comprising two substantially identical Recursive Systematic Coders (RSC) 102, 106 and one interleaver 104. Interleaver 104 operates as any well known interleaver, altering the time order of the information bits applied to it. The turbo coder of FIG. 1 generates a codeword comprising a systematic bit and two parity bits. The systematic bit is essentially an information bit.
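The structure of FIG. 1 can be illustrated with a short sketch. The particular 4-state RSC (feedback 1+D+D², feedforward 1+D²) and the toy interleaver permutation below are illustrative assumptions for this sketch only; FIG. 1 does not specify which RSCs or which interleaver are used.

```python
# Minimal sketch of a PCCC (turbo) encoder in the style of FIG. 1.
# The 4-state RSC taps and the fixed toy interleaver are assumptions,
# not the actual coders of FIG. 1.

def rsc_encode(bits):
    """Recursive Systematic Coder: returns one parity bit per input bit."""
    s1 = s2 = 0  # two delay elements -> 4 states
    parity = []
    for b in bits:
        fb = b ^ s1 ^ s2          # recursive feedback path
        parity.append(fb ^ s2)    # feedforward tap
        s1, s2 = fb, s1
    return parity

def turbo_encode(bits, interleaver):
    """Codeword per FIG. 1: a systematic bit plus two parity bits."""
    systematic = list(bits)
    parity1 = rsc_encode(systematic)                    # RSC 102
    interleaved = [systematic[i] for i in interleaver]  # interleaver 104
    parity2 = rsc_encode(interleaved)                   # RSC 106
    return list(zip(systematic, parity1, parity2))

bits = [1, 0, 1, 1]
interleaver = [2, 0, 3, 1]  # toy permutation of the bit time order
codewords = turbo_encode(bits, interleaver)
```

Each output tuple is one codeword: the information bit itself plus one parity bit from each RSC, giving the rate-1/3 structure described above.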
Referring now to FIG. 2, there is shown a standard configuration for a turbo decoder. Turbo decoder 200 comprises SISO (Soft Input Soft Output) devices 202 and 206. A SISO device receives soft information, processes such information in accordance with a particular algorithm or processing method, and outputs soft information that can be used to make a hard decision about the received information or can be used for further processing. The soft information is probability data on the received information, where such data give an indication of the confidence that is to be attributed to the value of the received information. For example, if the received information was decoded to be a “0” bit, the soft information associated with that received information gives an indication of how likely it is that the original information was indeed a “0” bit. The SISO device also generates additional soft information as it is processing the input information; the difference between the additional generated soft information and the soft information at the input is called extrinsic information. In many applications where a SISO device is used, the extrinsic information is recursively inputted as soft input information to allow the SISO to generate more reliable soft information about the received information. The SISO devices may process the soft information in accordance with a well known algorithm called the Log MAP (Maximum A Posteriori) algorithm. When the SISO devices process soft information as per the Log MAP algorithm, they are called Log MAP processors.
The Log MAP algorithm is a recursive algorithm for calculating the probability of a processing device being in a particular state at a given time based on received information. The probabilities are calculated by forward recursions and backward recursions over a defined time window or a block of information. The Log MAP algorithm essentially is the recursive calculation of probabilities of being in certain states based on received information and the a priori probabilities of going to specific states from particular states. The states describe the condition of a process that generates the information that is ultimately received. The Log MAP algorithm and how a Log MAP processor operates are often represented by a trellis which has a certain number of states. Each state has a probability associated with it and transition probabilities indicating the likelihood of transitioning from one state to another state either forward or backward in time. In general each state in a trellis has a number of transition probabilities entering it and leaving it. The number of probabilities entering or leaving the states of a trellis is referred to as the radix. Thus in a Radix-2 trellis, each state has two entering and two exiting transition probabilities. The trellis shows the possible transitions between states over time. In general a Radix-K trellis has K branches entering and K branches leaving each state in the trellis. The output of the Log MAP algorithm is called the LLR (Log Likelihood Ratio), which represents the probability that the original information (i.e., information prior to exposure to any noisy environment and prior to any processing) was a certain value. For example, for digital information, the LLR represents the probability that the original information was either a “0” bit or a “1” bit given all of the received data or observations.
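The LLR described above can be illustrated with a toy calculation. The sign convention used here (positive values favoring the “0” bit) is an assumption for illustration; the convention varies between implementations.

```python
import math

def llr(p0, p1):
    """Log Likelihood Ratio for a binary symbol given the observations:
    the log of the ratio of the probability that the original bit was '0'
    to the probability that it was '1'. Positive favors '0', negative
    favors '1' (an assumed sign convention)."""
    return math.log(p0 / p1)

# If the decoder believes the original bit was '0' with probability 0.9:
value = llr(0.9, 0.1)  # positive, so a hard decision would output '0'
```

The magnitude of the LLR carries the confidence: values near zero indicate the decoder is uncertain, while large magnitudes indicate a reliable decision.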
Still referring to FIG. 2, turbo decoder 200 further comprises interleaver 204 and deinterleaver 208. Deinterleaver 208 performs a reverse interleaving operation. Received samples YP1 and YS are applied to Log MAP processor 202 and received sample YP2 is applied to Log MAP processor 206 as shown. Turbo decoder 200 generates a Log Likelihood output. Interleaver 204, deinterleaver 208 and Log MAP processors 202 and 206 all share buffers and memory locations to retrieve and store extrinsic information. Boundary 210 symbolically represents the two memory spaces (for interleaver 204, deinterleaver 208 and the Log MAP processors) which are addressed differently. The side of boundary 210 where Log MAP processor 202 is located has memory for storing extrinsic information where such information is stored in memory having sequential memory addresses. In other words, the information that is to be retrieved is located in contiguous memory locations. However, because of the alteration in the time order of extrinsic information stored on the other side of boundary 210 (i.e., the side where Log MAP processor 206 is located), the extrinsic information is not retrieved from sequential memory locations; unlike the sequential case, where only one memory address need be known and the other memory address is simply the next higher address, two distinct memory addresses are used to retrieve the extrinsic information.
The retrieval of information from two memory addresses in a sequential manner therefore reduces the speed of operation of the turbo decoder. To resolve this decrease in speed of operation, the extrinsic memory is replicated a certain number of times depending on the radix value of the turbo decoder. For example, for a Radix-4 turbo decoder, the extrinsic memory is duplicated. For a Radix-8 turbo decoder, there are three blocks of extrinsic memory with the same addresses and the same contents. In general, for a Radix-K turbo decoder there are log2 K blocks of extrinsic information memory, all of which have the same addresses and the same contents stored at those addresses; that is, the extrinsic memory is replicated and corresponding memory addresses contain identical information at all times. In this manner multiple retrievals of extrinsic information can be performed at a particular instant. Note that the multiple addresses generated can have the same value, but the actual values retrieved will be from different memory blocks. The replicated extrinsic information memories are independent of each other, meaning that accessing information from one extrinsic memory does not, in any manner, affect any other extrinsic memory.
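The replication scheme described above might be sketched as follows. The list-backed banks, class name and method names are illustrative assumptions, not an actual decoder memory design.

```python
import math

class ReplicatedExtrinsicMemory:
    """Sketch of the replicated extrinsic memory described above:
    log2(K) independent banks with identical addresses and identical
    contents, so that log2(K) reads (possibly of the same address) can
    be served in a single clock cycle."""

    def __init__(self, radix, size):
        self.num_banks = int(math.log2(radix))  # e.g. Radix-4 -> 2 banks
        self.banks = [[0.0] * size for _ in range(self.num_banks)]

    def write(self, addr, value):
        # Every bank is written so corresponding addresses stay identical.
        for bank in self.banks:
            bank[addr] = value

    def read_parallel(self, addrs):
        # One address per bank; each bank is accessed independently, so
        # one read never blocks another.
        assert len(addrs) == self.num_banks
        return [self.banks[i][a] for i, a in enumerate(addrs)]

mem = ReplicatedExtrinsicMemory(radix=4, size=8)  # Radix-4: duplicated
mem.write(3, 1.5)
values = mem.read_parallel([3, 3])  # same address, different banks
```

The cost of the scheme is the duplicated storage and write fan-out; the benefit, as the text notes, is that multiple extrinsic values can be fetched in one instant without address conflicts.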
As the design of wireless communication systems evolves into systems with relatively higher data rates, a need has arisen to process more information per clock cycle. A clock cycle is a unit of processing time for a processor such as a Turbo coder or decoder. Turbo coding and decoding have evolved as the channel coding of choice in many wireless communication systems.
Referring to FIG. 3, there is shown a Parallel Concatenated Convolutional Code (PCCC) decoder 300. PCCC decoder 300 comprises deinterleaver 302 coupled to Soft Input Soft Output (SISO) decoder 304, which is coupled to interleaver 306. Deinterleaver 302 performs the reverse operation of interleaving; that is, interleaved information is reordered so that the information returns to its original order. The turbo decoder of FIG. 3 processes codewords received over a communication channel into soft outputs and/or information bits. SISO device 304 receives soft information, processes such information in accordance with a particular algorithm (e.g., the Log MAP algorithm) or processing method, and outputs soft information that can be used to make a hard decision about the received information or can be used for further processing. The SISO device can process the received codewords in accordance with the well known Log MAP algorithm; in such a case, the SISO device is referred to as a Log MAP processor. Typically a Log MAP processor has a radix-2 trellis, meaning that it processes states that have two entering transition probabilities and two exiting transition probabilities. A radix-2 trellis processes one bit per unit time. To satisfy the need for higher capacity communication systems, the known art has developed higher radix turbo decoders that can process relatively more information per unit time than radix-2 turbo decoders.
In particular, the known art has an N-state radix-K turbo decoder using the PCCC architecture, where N is an integer power of 2 equal to 2 or greater and K is an integer equal to 4 or greater. Referring to FIG. 4, there is shown an 8-state radix-4 trellis under which the turbo decoder of FIG. 3 operates. Note that αt,j, which is called a forward path metric, represents the probability of being in state j at time t for a forward recursion; βt,j, which is called a backward path metric, likewise represents the probability of being in state j at time t given the received information. Also, although not shown in FIG. 4, associated with the trellis are branch metrics; γt,i,k is a branch metric which represents the probability of observing the received information given the transition from state i to state k and arriving at state k at time t. The PCCC turbo decoder shown in FIG. 3, and in particular SISO processor 304, operates as per the trellis of FIG. 4, and the SISO has the internal structure shown in FIG. 5.
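One forward step of the recursion over these path and branch metrics can be sketched as follows in the log domain, assuming the exact Jacobian Log Sum and a toy 2-state radix-2 trellis step; the branch metric values are made up for illustration and do not come from any figure.

```python
import math

def log_sum(xs):
    """Jacobian Log Sum: log(sum(exp(x) for x in xs)),
    computed stably around the maximum."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def forward_recursion(alpha_t, branches):
    """One forward step: the new path metric for state k is the Log Sum,
    over every branch (i, k, gamma) entering state k, of
    alpha_t[i] + gamma (all quantities are log-domain metrics)."""
    incoming = {}
    for i, k, gamma in branches:
        incoming.setdefault(k, []).append(alpha_t[i] + gamma)
    return {k: log_sum(terms) for k, terms in incoming.items()}

# Toy 2-state radix-2 step: each state has two entering branches.
alpha_t = {0: 0.0, 1: -1.0}
branches = [(0, 0, -0.1), (1, 0, -0.4),   # branches entering state 0
            (0, 1, -0.7), (1, 1, -0.2)]   # branches entering state 1
alpha_next = forward_recursion(alpha_t, branches)
```

The backward path metrics β are computed by the same operation run over the trellis in the opposite time direction.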
The SISO processor shown in FIG. 5 comprises Branch Metric Calculator (BMC) 502 in communication with Path Metric Calculators (PMC) 504 and 506 which are coupled to Log Likelihood (LL) calculators 508 and 510 via path metric buffers 512 and 514. The calculated branch metrics for different times are stored in buffers 501 and 503. The branch metrics are calculated for a stream of information partitioned into time windows. As shown in FIG. 4, the current time window is W time units in length where W is an integer. The branch metrics are calculated from input symbols applied to input buffer 505 and from soft information processed by interleaver/deinterleaver 526. The input symbols are the codewords received by the turbo decoder.
The LL calculators use the calculated path metrics to calculate log likelihood transition terms. The LL calculators are coupled to subtracting circuits that calculate the difference between their outputs and an extrinsic information (i.e., a type of soft information) input, resulting in a Log Likelihood Ratio (LLR) output. LLR circuits 516 and 518 are subtractor circuits; they calculate the difference between log likelihood transition terms and extrinsic information stored in FIFO (First In First Out memory) 519. The LLR outputs are stored into output buffer 524, which provides decoded bits. The LLR outputs, when not construed as decoded bits, are applied to interleaver/deinterleaver circuit 526 comprising interleaver/deinterleaver address generator 520 coupled to interleaver/deinterleaver 522. Circuit 526 thus operates as either an interleaver or a deinterleaver. The LL calculators 508 and 510 and the path metric calculators are constructed with Log Sum operators designed with an Add Compare Select (ACS) architecture.
The ACS architecture is based on a definition of the Log Sum operation called the Jacobian relationship; the ACS architecture uses an approximation of the Jacobian relationship. The Jacobian relationship defines a Log Sum operation in which a Log Sum operator logarithmically combines sums of branch metrics and path metrics. The Log Sum operation for inputs A1, A2, A3, . . . is defined by the Jacobian relationship as follows:

Log Sum(A1, A2, A3, . . . ) = max(A1, A2, A3, . . . ) + ƒ(A1, A2, A3, . . . )

where ƒ(A1, A2, A3, . . . ) = log(exp(−Δ1) + exp(−Δ2) + exp(−Δ3) + . . . ) and Δi = max(A1, A2, A3, . . . ) − Ai.
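The Jacobian relationship and its approximation can be compared in a short sketch. A common hardware approximation simply drops the correction function ƒ and keeps only the max term (the max-log approximation); whether the ACS design discussed here retains a correction term is not specified, so the version below is an illustrative assumption.

```python
import math

def log_sum_exact(xs):
    """Exact Jacobian relationship: max(xs) + f, where
    f = log(sum(exp(-delta_i))) and delta_i = max(xs) - x_i."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def log_sum_max_only(xs):
    """Max-log approximation used by simple Add-Compare-Select units:
    the correction function f is dropped entirely."""
    return max(xs)

A = [1.0, 0.2, -0.5, 0.9]
exact = log_sum_exact(A)
approx = log_sum_max_only(A)
# The max-only value under-estimates the exact Log Sum by f(A1, A2, ...),
# which is bounded by log(n) for n inputs.
```

Since ƒ is always between 0 and log(n), hardware implementations often approximate it with a small lookup table keyed on the Δi values rather than computing the exponentials.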
Referring to FIGS. 6A and 6B, there are shown an SCCC encoder and decoder, respectively. The SCCC encoder of FIG. 6A comprises outer RSC 602 coupled to interleaver 604, which is coupled to inner RSC 606. RSC 602 is different from RSC 606 in that it operates (i.e., encodes information applied to it) in accordance with a trellis having a certain number of states that is different from the number of states of the trellis used by inner RSC 606. For example, RSC 602 may operate as per a 16-state trellis whereas RSC 606 operates as per a 4-state trellis. Because the RSC's are necessarily different, two separate such circuits have to be built for an SCCC coder. Similarly, FIG. 6B shows an architecture for an SCCC decoder comprising Inner SISO 608 coupled to Outer SISO 614 via Interleaver 610 and Deinterleaver 612. As with the RSC's, the Inner and Outer SISO's operate in accordance with different trellises having different numbers of states. Therefore, depending on the requirements of the communication system within which the decoder is to be used, different decoders have to be built for different requirements.
Not only are the SISO's for a particular SCCC decoder different, but different decoders may have to be designed for different parts (i.e., different communication channels) of a communication system. Further, because the inner SISO and the outer SISO operate as per different trellises, each such SISO necessarily will use different memories to perform its decoding operation. In sum, the requirements for a communication system result in burdensome equipment and design specifications for communication system designers, who may have to build a plurality of specific SCCC coders and decoders to meet such requirements. As a response to the burdensome requirements of different SCCC coders and decoders, the known art has developed a technique for processing the information as per a trellis regardless of the number of states contained in the trellis. Further, the same hardware or processing equipment can be used to process information using different types of trellises.
Referring to FIG. 7, there is shown a 32-state trellis depicting the possible transitions of the states from time t to time t+1. A state at time t is represented by St, and one time unit later, at time t+1, each state is represented as St+1. The same trellis is depicted in another format called the “butterfly.” The particular example to be discussed uses a technique called “in-place addressing” applied to an SCCC decoder where 8 states from the trellis are processed during a clock cycle. A clock cycle represents the basic unit time period. The example describes the processing of forward path metrics as per a 32-state trellis and is depicted in FIG. 8. Backward path metrics can also use the in-place addressing technique.
The technique, referred to as in-place addressing, uses the same memory locations to read and write path metric values as information applied to a decoder is being processed as per a particular trellis. The technique is able to process information as per different trellises having different numbers of states. Thus, for example, the equipment implementing the SISO processors can be configured to process information as per a 16-state trellis, and the same equipment can be reconfigured to process other information as per a 4-state trellis. The ability of a SISO processor to process information differently at different times significantly reduces the burdensome equipment requirements of SCCC decoders. Further, the in-place addressing technique allows a turbo decoder to process a portion of the states of the trellis during a particular clock cycle; this allows processing of information as per an N-state trellis (where N is a relatively large number) without burdensome equipment (i.e., hardware and/or software) requirements. N is an integer power of 2 equal to 2 or greater.
In FIG. 8, the memory locations for each of the 32 states are labeled accordingly in the SOURCE memory block. The SOURCE memory block is divided into four columns (cols. I–IV) where each column contains 8 memory locations and each column is divided into an upper portion and a lower portion. For example, for column I, the upper portion contains memory locations 0–3 and the lower portion contains memory locations 4–7. The memory locations represent states of a trellis that contain values for path metrics. The DESTINATION memory block is the same memory block used for the SOURCE memory block. A separate DESTINATION memory block is shown only to facilitate the explanation of how the in-place addressing technique is achieved; in fact only one block of memory is used. As information is processed as per a trellis structure such as the butterfly structure shown in FIG. 7, the trellis determines the destination state for each of the starting states in the SOURCE memory block. In FIG. 8, the destination states or new states are shown by the BUTTERFLY mapping for each of the four groups of 8 states. The goal of the in-place addressing technique is to rearrange each group of 8 new states into their original order and store these rearranged new states in the same memory block from which they were retrieved, thereby allowing the SISO device to process different size trellises using the same memory blocks. In short, the same memory block is used for storing SOURCE states and DESTINATION states at different times. The SOURCE states are retrieved from a memory block and the DESTINATION states are stored in that same memory block. As the DESTINATION states (also called ‘new states’) are determined by the trellis structure, some of these states are temporarily stored in a HOLD register to allow each group of 8 states to be rearranged into their original order. Therefore, as shown in FIG.
8, each group of 8 states is retrieved and applied to a trellis circuit (e.g., a digital combinatorial logic circuit) that determines the destination states, and such destination states are stored back into the same memory block keeping the original order of each block of 8 states; maintaining the original order of a block of states is called ‘the order requirement.’ When a portion of the destination states cannot be stored back into the same block of memory due to the order requirement, that portion is stored temporarily in a HOLD register until it can be stored back into the same memory block in the proper locations so as to satisfy the order requirement. For the example given, 32 states are retrieved 8 states at a time from memory locations 0–31 and the resulting 32 new states are rewritten into the same memory locations 0–31 with each group of 8 new states complying with the order requirement. The particular steps in implementing the in-place addressing algorithm are shown in FIG. 8 as follows:

STEP 1A: empty the HOLD register.
STEP 1B: apply col. I to the trellis circuit, resulting in new states 0–3 and 16–19 as shown in the first column of the Butterfly mapping. Col. I is now ready to receive new states.
STEP 2A: store new states 0–3 in col. I upper.
STEP 2B: store new states 16–19 in the HOLD register.

AT THIS POINT COL. I LOWER IS READY FOR NEW STATES 4–7 TO COMPLY WITH THE ORDER REQUIREMENT. NEW STATES 16–19 CANNOT BE STORED INTO COL. I LOWER BECAUSE THAT WOULD VIOLATE THE ORDER REQUIREMENT. FURTHER, NEW STATES 16–19 CANNOT BE STORED INTO ANY OTHER COLUMN BECAUSE THE OTHER COLUMNS HAVE NOT YET BEEN APPLIED TO THE TRELLIS CIRCUIT AND THUS CANNOT YET BE OVERWRITTEN WITH NEW STATES.

STEP 3A: apply col. II to the trellis circuit, resulting in new states 4–7 and 20–23. Col. II is now ready to receive new states.
STEP 3B: transfer new states 4–7 into col. I lower; col. I is now full with new states.
STEP 4A: transfer new states 16–19 from the HOLD register to col. II upper.
STEP 4B: store new states 20–23 into the HOLD register.
STEP 5A: apply col. III to the trellis circuit, resulting in new states 8–11 and 24–27. Col. III is now ready to receive new states.
STEP 5B: transfer new states 20–23 from the HOLD register to col. II lower; col. II is now full.
STEP 5C: store new states 8–11 in col. III upper.
STEP 6A: store new states 24–27 in the HOLD register.
STEP 6B: apply col. IV to the trellis circuit, resulting in new states 12–15 and 28–31. Col. IV is now ready to receive new states.
STEP 7A: store new states 12–15 into col. III lower; col. III is now full.
STEP 7B: transfer new states 24–27 from the HOLD register to col. IV upper; store new states 28–31 into the HOLD register.
STEP 8A: transfer new states 28–31 from the HOLD register into col. IV lower; col. IV is now full.
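The stepwise sequence above can be simulated with a toy model. The butterfly mapping is taken from the description of FIG. 8 (col. I yields new states 0–3 and 16–19, col. II yields 4–7 and 20–23, and so on); the rule used below to decide when a group of four new states may leave the HOLD register is an illustrative reconstruction of the order requirement, not circuitry from the known art.

```python
def butterfly(states, col):
    """Toy stand-in for the trellis circuit of FIG. 8: the 8 states of
    column n yield new states 4n..4n+3 (upper half) and 16+4n..16+4n+3
    (lower half). A real circuit would compute path metric values from
    the states; here only the state labels are tracked."""
    assert len(states) == 8
    return (list(range(4 * col, 4 * col + 4)),
            list(range(16 + 4 * col, 16 + 4 * col + 4)))

def in_place_update(mem):
    """Simulate the in-place addressing steps: each column of 8 states is
    read, its new states are written back into the same 32-entry memory,
    and any group of 4 that cannot yet satisfy the order requirement
    waits in the HOLD register."""
    write_ptr = 0   # next free group of 4 locations in the read region
    hold = []       # the HOLD register (groups of 4 new states)
    for col in range(4):
        # Reading the column frees its 8 locations for new states.
        upper, lower = butterfly(mem[8 * col: 8 * col + 8], col)
        pending = hold + [upper, lower]
        hold = []
        while pending:
            for g in pending:
                # A group fits if it starts a fresh column half or
                # continues the consecutive run of the half just written
                # (the order requirement: each column ends up holding 8
                # consecutive new states in their original order).
                fits = write_ptr % 8 == 0 or g[0] == mem[write_ptr - 1] + 1
                if write_ptr < 8 * (col + 1) and fits:
                    mem[write_ptr:write_ptr + 4] = g
                    write_ptr += 4
                    pending.remove(g)
                    break
            else:
                break   # nothing fits until the next column is read
        hold = pending  # park the remainder, as in the HOLD register
    return mem

mem = in_place_update(list(range(32)))
# Each column of 8 now holds 8 consecutive new states in original order.
```

Running the model reproduces the outcome of the steps: col. I holds new states 0–7, col. II holds 16–23, col. III holds 8–15 and col. IV holds 24–31, all written back into the single memory block they were read from.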
The above description of the in-place addressing technique is for forward path metrics. A similar technique for backward path metrics can also be used where the SOURCE states are mapped into DESTINATION states as per a backward trellis structure similar to that shown in FIG. 7. Therefore, one memory block can be used to perform the trellis processing of a SISO for a particular trellis having a certain number of states.
Many state of the art wireless communication systems use turbo coding and decoding to process conveyed information. Some systems use SCCC while others use PCCC. It is desirable to use the PCCC decoder design described above because relatively more information can be processed per clock cycle, resulting in relatively higher throughputs. At the same time, it is also desirable to use the in-place addressing technique described above because the same block of memory can be used to implement the inner and outer SISO processors, resulting in an SCCC decoder that uses relatively less equipment.