The goal of a modern digital communication system design is to send the maximum amount of information across a band-limited channel using the least amount of power. Specifically, the designer's goals are to maximize the transmission bit rate R, minimize the probability of bit error Pb, minimize the required ratio of bit energy to noise power spectral density Eb/N0, and minimize the required system bandwidth W. These goals conflict with one another, however, which necessitates trading off one system goal against another.
Modulation and coding techniques allow the designer to trade off power and bandwidth to achieve a desired bit error rate (BER). This trade-off is illustrated by a family of Pb versus Eb/N0 curves for coherent detection of orthogonal signaling in FIG. 1 and of multiple phase signaling in FIG. 2. These signaling schemes are called M-ary because they process k bits per symbol interval. The modulator uses one of its M=2^k waveforms to represent each k-bit message sequence.
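The M=2^k relationship can be sketched as follows; this is an illustrative example (not from the source) of mapping a k-bit message sequence to one of M waveform indices:

```python
def symbol_index(bits):
    """Map a k-bit message sequence to one of M = 2**k waveform indices."""
    index = 0
    for b in bits:
        index = (index << 1) | b  # shift in each message bit
    return index

M = 2 ** 3                      # k = 3 bits per symbol -> M = 8 waveforms
idx = symbol_index([1, 0, 1])   # waveform index 5 of the 8 available
```

Each distinct k-bit pattern selects a distinct waveform, which is what makes the scheme M-ary.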
For orthogonal signal sets (FIG. 1), such as frequency shift keying (FSK) modulation, increasing the size of the symbol set reduces the required Eb/N0 at the expense of more bandwidth. In FSK, different frequencies represent different symbols, so the signals are orthogonal; that is, the transmission of one symbol does not affect the other symbols, and all symbols require the same energy per symbol. Increasing the number of symbols increases the number of bits per symbol, so the energy per bit decreases.
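A minimal sketch of this effect, under the assumption of a fixed energy Es per symbol: since each symbol carries log2(M) bits, the energy per bit is Eb = Es / log2(M), which falls as the symbol set grows.

```python
import math

def energy_per_bit(es, M):
    """Energy per bit for an orthogonal set of M equal-energy symbols."""
    return es / math.log2(M)

# Growing the symbol set from M=2 to M=32 spreads Es over more bits:
eb_2  = energy_per_bit(1.0, 2)    # 1 bit/symbol  -> Eb = Es
eb_32 = energy_per_bit(1.0, 32)   # 5 bits/symbol -> Eb = Es / 5
```

The numbers (Es = 1.0) are arbitrary; only the ratio matters.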
For non-orthogonal signal sets (FIG. 2), such as multiple phase shift keying (MPSK) modulation, increasing the size of the symbol set requires an increased Eb/N0. In MPSK, different carrier phase increments represent different symbols. As the number of symbols increases, the spacing between phase increments decreases, requiring more energy per symbol to maintain the same error performance. Although the bandwidth does not increase, the required energy per symbol grows faster than the number of bits per symbol, thus requiring an increase in energy per bit.
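The shrinking spacing can be quantified with the standard minimum Euclidean distance between nearest-neighbor MPSK phase points, d_min = 2·sin(π/M)·√Es; this sketch (illustrative, not from the source) shows how d_min collapses as M grows for a fixed symbol energy:

```python
import math

def mpsk_min_distance(M, es=1.0):
    """Minimum Euclidean distance between adjacent MPSK constellation points."""
    return 2.0 * math.sin(math.pi / M) * math.sqrt(es)

d4  = mpsk_min_distance(4)    # QPSK:   ~1.414
d16 = mpsk_min_distance(16)   # 16-PSK: ~0.390
```

To restore the original d_min (and hence comparable error performance) at larger M, Es must be increased, which is the energy penalty described above.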
Another designer option is to reduce power requirements using forward error correction (FEC) codes. These codes insert structured redundancy into the source data so that the presence of errors can be detected and corrected. Two categories of FEC codes are block codes and convolutional codes. In the case of block codes, the source data is segmented into blocks of k message bits, to which n−k redundant bits are appended to form an n-bit codeword. The ratio of message bits to codeword bits (k/n) is called the code rate; it is the portion of the codeword that carries message information.
A convolutional code is described by three integers n, k, and K, where the ratio k/n has the same code-rate significance as for block codes. Convolutional codewords are not formed into blocks as in block codes, but are generated continuously by an encoding shift register. The integer K is called the constraint length, and the length of the encoding shift register is K−1.
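A shift-register encoder can be sketched as follows. The specific parameters here are assumptions for illustration, not taken from the source: a rate-1/2 (k=1, n=2), constraint length K=3 code with the commonly cited generator polynomials 7 and 5 (octal).

```python
def conv_encode(bits, generators=(0b111, 0b101), K=3):
    """Rate-1/n convolutional encoder: one input bit in, n output bits out.

    The register holds the current bit plus K-1 previous bits; each
    generator selects which register taps are XORed into an output bit.
    """
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)  # shift new bit in
        for g in generators:
            out.append(bin(state & g).count("1") % 2)  # parity of tapped bits
    return out

coded = conv_encode([1, 0, 1, 1])   # 4 message bits -> 8 channel bits
```

Note the continuous operation: output bits depend on the current input and the K−1 bits still in the register, with no block boundaries.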
FEC codes correct some of the errors in the received data, which allows a lower power to be used to transmit the data. Coding gain is defined as the reduction, expressed in decibels, in the required Eb/N0 to achieve a specified error performance with an FEC-coded system relative to an uncoded one using the same modulation. The coding performance for coherently demodulated BPSK over a Gaussian channel is illustrated for several block codes in FIG. 3 and for various convolutional codes in FIG. 4.
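The coding gain definition can be made concrete with a sketch. For uncoded coherent BPSK on a Gaussian channel, Pb = ½·erfc(√(Eb/N0)), so the required Eb/N0 for a target BER can be found numerically; the coded requirement below (7.1 dB) is a hypothetical illustrative number, not a figure from the source.

```python
import math

def bpsk_ber(ebno_db):
    """Uncoded coherent BPSK bit error probability on an AWGN channel."""
    ebno = 10 ** (ebno_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebno))

def required_ebno_db(target_ber, lo=0.0, hi=20.0):
    """Bisect for the Eb/N0 (dB) at which BPSK just meets target_ber."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if bpsk_ber(mid) > target_ber:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def coding_gain_db(uncoded_db, coded_db):
    """Coding gain: reduction in required Eb/N0, in dB."""
    return uncoded_db - coded_db

uncoded = required_ebno_db(1e-5)          # roughly 9.6 dB for Pb = 1e-5
gain = coding_gain_db(uncoded, 7.1)       # vs. a hypothetical coded system
```

The uncoded requirement of about 9.6 dB at Pb = 1e-5 is the standard BPSK benchmark against which the curves of FIG. 3 and FIG. 4 are read.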
Block codes and convolutional codes achieve an improvement in BER through bandwidth expansion. That is, a k-bit message sequence is replaced by an n-bit codeword. To achieve the same message bit rate, the channel bandwidth must be increased by a factor of n/k.
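As a small worked example, take the (7,4) Hamming code (an assumed illustration, not a code discussed in the source): k=4 message bits plus n−k=3 redundant bits form an n=7 bit codeword.

```python
from fractions import Fraction

def code_rate(k, n):
    """Portion of the codeword carrying message information, k/n."""
    return Fraction(k, n)

def bandwidth_expansion(k, n):
    """Factor by which bandwidth grows to keep the same message bit rate."""
    return Fraction(n, k)

rate = code_rate(4, 7)               # 4/7
expand = bandwidth_expansion(4, 7)   # 7/4, i.e. 75% more bandwidth
```

The bandwidth expansion factor is simply the reciprocal of the code rate, which is the trade TCM (below) is designed to avoid.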
Trellis-coded modulation (TCM) combines modulation and coding schemes to achieve coding gain without a bandwidth expansion. Redundancy is provided by increasing the signal alphabet through multilevel/phase signaling, so the channel bandwidth is not increased.
FEC coding is a mature field of communications technology. Many types of codes have been developed and are sometimes combined to achieve various performance goals. Previously, Viterbi-decoded convolutional codes combined with Reed-Solomon block codes provided what were at one time viewed as acceptable coding gains. But conventional TCM schemes and convolutional encoding schemes failed to achieve the higher levels of performance that are now provided through the use of turbo codes. Turbo codes are two- or higher-dimensional block codes that use an iterative feedback decoding scheme. Unfortunately, turbo codes require processing over an extended time before final decisions are produced and therefore cause the communication system to experience undesirably lengthy latency periods. It is believed that the conventional TCM and convolutional encoding schemes failed to achieve these higher levels of performance due, at least in part, to an excessive sensitivity to small Euclidean distances (EDs) between nearest-neighbor phase points in a phase constellation.