This invention relates to transmitting digital symbols over band-limited channels using a modulated carrier system of the type including an encoding circuit for encoding said symbols into discrete signals selected from an available signal alphabet, and in which dependencies are introduced between successive signals in the sequence to increase immunity to noise and distortion.
In conventional data transmission systems in which N-bit data symbols are each represented by a unique signal drawn from an alphabet of 2^N such signals, each selection of a signal to be transmitted depends only upon the data symbol represented by that signal; there is thus a one-to-one correspondence between the set of 2^N different data symbols and the signal alphabet.
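Such a conventional memoryless system can be sketched as follows (a hypothetical illustration for N = 2, using Gray-coded 4-PAM amplitude levels; the particular mapping is assumed, not taken from the text above):

```python
# Conventional uncoded transmission with N = 2: each 2-bit data symbol
# corresponds one-to-one to a signal in a 2^2 = 4-point alphabet
# (here, 4-PAM amplitude levels). No memory, no dependencies between
# successive signal selections.
ALPHABET = {0b00: -3, 0b01: -1, 0b11: +1, 0b10: +3}  # Gray-coded 4-PAM

def modulate(symbols):
    """Map each N-bit data symbol independently to its alphabet signal."""
    return [ALPHABET[s] for s in symbols]

def demodulate(signals):
    """Decode each received signal independently into the nearest level."""
    inverse = {v: k for k, v in ALPHABET.items()}
    return [inverse[min(ALPHABET.values(), key=lambda a: abs(a - r))]
            for r in signals]

print(modulate([0b00, 0b11, 0b10]))   # [-3, 1, 3]
print(demodulate([-2.8, 1.2, 3.4]))   # [0, 3, 2], i.e. 0b00, 0b11, 0b10
```

Because each decision here is made signal-by-signal, a single large noise excursion suffices to cause an error; the coded systems described next trade a larger alphabet for sequence-level dependencies that resist such errors.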
Csajka et al., U.S. Pat. No. 3,877,768, and Ungerboeck, "Channel Coding with Multilevel/Phase Signals," IEEE Transactions on Information Theory, Vol. IT-28, No. 1, January 1982, describe systems in which each signal selection depends not only upon the symbol to be represented, but also upon previous signal selections. The conventional 2^N signal alphabet is doubled in size to 2^(N+1). An encoder maps the 2^N different data symbols into 2^(N+1) different coded symbols, each represented as N+1 bits. Finite-state memory in the encoder causes each (N+1)-bit coded symbol to depend not only on the current N-bit data symbol but also on the previous data sequence, which is reflected in the state of the finite-state memory. The (N+1)-bit coded symbols are then represented by a sequence of alphabet signals. The alphabet signals are divided into disjoint subsets (i.e., subsets having no signals in common), each subset corresponding to transitions from one state of the finite-state memory to a particular subsequent state. The effect of the coding is to permit only certain sequences of alphabet signals to be transmitted, and the coded dependency information carried by every signal is exploited at the receiver through use of a maximum likelihood sequence estimation decoding technique (e.g., one based on the Viterbi algorithm, as described in Forney, "The Viterbi Algorithm," Proceedings of the IEEE, Vol. 61, No. 3, March 1973, incorporated herein by reference). In such a technique, instead of decoding each received signal independently into the alphabet signal most likely to have been sent (i.e., the alphabet signal closest to the received signal in the sense of Euclidean distance, if the noise can be regarded as Gaussian), decoding decisions are delayed for a predetermined number of signal intervals. Each decision is then made so that a sequence of received signals is decoded into the sequence of alphabet signals most likely to have been sent, i.e., the permitted sequence closest to the received sequence in the sense of the sum of squared Euclidean distances (equivalently, the Euclidean distance between the two sequences regarded as vectors).
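The scheme described above can be sketched in miniature as follows. This is an assumed toy construction, not the circuit of the referenced patents: one data bit per interval (N = 1), an alphabet doubled to 2^(N+1) = 4 signals (4-PAM levels) partitioned into two disjoint subsets with enlarged intra-subset distance, a 2-state encoder whose state selects the subset, and a Viterbi maximum likelihood sequence estimator at the receiver:

```python
# Toy trellis-coded system in the style of Csajka/Ungerboeck (assumed
# construction for illustration). The 4-PAM alphabet {-3,-1,+1,+3} has
# minimum distance 2; the two disjoint subsets below each have
# intra-subset distance 4.
SIGNAL = {0: [-3, +1],   # subset used when the encoder is in state 0
          1: [-1, +3]}   # subset used when the encoder is in state 1

def encode(bits):
    """Finite-state encoding: the state (here, the previous bit) selects
    the subset; the current bit selects the signal within the subset.
    Only certain signal sequences can therefore be transmitted."""
    state, out = 0, []
    for b in bits:
        out.append(SIGNAL[state][b])
        state = b
    return out

def viterbi_decode(received):
    """Maximum likelihood sequence estimation: find the permitted signal
    sequence closest to `received` in the sum of squared Euclidean
    distances, and recover the data bits that produced it."""
    # paths[s] = (accumulated metric, decoded bits) of the survivor ending
    # in state s; the encoder starts in state 0.
    paths = {0: (0.0, []), 1: (float('inf'), [])}
    for r in received:
        nxt = {}
        for new_state in (0, 1):      # in this toy code, new state = input bit
            b = new_state
            cands = [(m + (r - SIGNAL[s][b]) ** 2, bits + [b])
                     for s, (m, bits) in paths.items()]
            nxt[new_state] = min(cands)
        paths = nxt
    return min(paths.values())[1]

tx = encode([1, 0, 1, 1])             # -> [+1, -1, +1, +3]
rx = [1.9, -1.7, 1.4, 2.2]            # tx after additive channel noise
print(viterbi_decode(rx))             # recovers [1, 0, 1, 1]
```

Note that the second received sample, -1.7, is closer to -1 than an uncoded slicer would require for reliability, and the fourth, 2.2, lies between +1 and +3; the sequence decoder nevertheless recovers the data because only certain signal sequences are permitted by the trellis.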
The so-called coding gain (i.e., increased resistance to Gaussian noise) achieved by Csajka and Ungerboeck is, however, partly offset by the additional power needed to transmit the larger signal alphabet while maintaining the same minimum distance between signal points.
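The power cost of alphabet doubling can be made concrete with a rough numeric illustration (the particular constellations are assumed for illustration, not drawn from the text above): doubling a 16-point QAM constellation to a 32-point cross constellation at the same minimum distance roughly doubles the average signal energy, a cost of about 3 dB that the coding gain must exceed to yield a net benefit.

```python
# Average-energy comparison of two QAM constellations with the same
# minimum distance (2 units), illustrating the power cost of doubling
# the alphabet. Constellations are assumed for illustration.
import math

def avg_energy(points):
    """Mean squared distance of the constellation points from the origin."""
    return sum(x * x + y * y for x, y in points) / len(points)

# 16-QAM on the odd-integer grid: coordinates in {-3,-1,+1,+3}.
qam16 = [(x, y) for x in (-3, -1, 1, 3) for y in (-3, -1, 1, 3)]

# 32-point cross constellation: the 6x6 odd-integer grid minus its four
# corner points, again with minimum distance 2.
qam32 = [(x, y) for x in (-5, -3, -1, 1, 3, 5)
                for y in (-5, -3, -1, 1, 3, 5)
         if not (abs(x) == 5 and abs(y) == 5)]

cost_db = 10 * math.log10(avg_energy(qam32) / avg_energy(qam16))
print(avg_energy(qam16), avg_energy(qam32), round(cost_db, 2))
# 10.0 20.0 3.01 -- about 3 dB of extra power for the doubled alphabet
```

The net coding gain of such a scheme is thus the raw distance gain of the trellis code less this roughly 3 dB expansion cost.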