A transmission channel conveys information from a sender to a receiver via propagation. Propagation can be air in the case of cellular mobile telephones, or any other means of propagation such as a cable, for example, in other applications.
A fundamental factor limiting the performance of a digital communication system is the phenomenon known as inter-symbol interference, which is well known to the person skilled in the art. Inter-symbol interference causes, at the receiver level, a temporal spreading of each symbol (e.g., bit) transmitted, which lasts longer than the initial duration of the symbol. This symbol duration may also be referred to as the bit time, for example.
Stated otherwise, the signal received at a given instant does not depend on one symbol alone (e.g., a bit), but also on the other bits or symbols sent, whose effects extend over durations greater than a bit time. The signal received at a given instant therefore depends on the symbol concerned, and also on the adjacent symbols.
The causes of inter-symbol interference are multifold. One of them is due in particular to the multiple propagations of the signal between the sender and the receiver when the signal is reflected or diffracted by various obstacles, leading, on reception, to several signal copies mutually shifted in time. Moreover, this interference between symbols is produced not only by the propagation between the sender and the receiver, but also by the sending/receiving devices themselves (i.e., modulator, filter, etc.).
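As a minimal sketch of the multipath mechanism described above, the channel can be modeled as an FIR filter whose taps represent delayed, attenuated copies of the signal. The tap values and symbols below are purely illustrative, not taken from any particular system.

```python
import numpy as np

# Hypothetical channel impulse response with L = 3 coefficients:
# each tap is one delayed, attenuated copy of the transmitted signal.
h = np.array([1.0, 0.6, 0.3])

# Example BPSK symbols (M = 2 possible values, mapped to +/-1).
symbols = np.array([1.0, -1.0, 1.0, 1.0, -1.0])

# The received samples are the convolution of the symbols with the channel;
# each sample now mixes up to L consecutive symbols, which is precisely
# the inter-symbol interference described above.
received = np.convolve(symbols, h)
print(received)
```

Each entry of `received` depends on up to three adjacent symbols, so a decision can no longer be taken sample by sample.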
During communications with interference between symbols, the problem arises of estimating the impulse response of the transmission channel. The quality of this estimate depends on the capacity to eliminate the interference between symbols, and hence to take correct decisions regarding symbols sent.
Generally, the estimate of the impulse response of the channel, or more simply the channel estimate, is effected within the GSM telephone domain by using least squares techniques, and by using a predetermined sequence of symbols known to the sender and to the receiver. This is referred to by the person skilled in the art by the term training sequence. This training sequence is present within each symbol train or burst sent. When the characteristics of the channel are sufficiently well estimated, the estimated coefficients of the impulse response of the channel are used in an equalization processing operation, also well known to the person skilled in the art, to decode the signal received, that is, to retrieve the logic values of the bits (data) sent in the train.
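A least-squares channel estimate of the kind mentioned above can be sketched as follows. The training sequence, channel length and tap values here are hypothetical placeholders (the sequence length 26 merely echoes a GSM-style burst); only the general technique, solving a linear system built from the known training symbols, is illustrated.

```python
import numpy as np

# Hedged sketch of least-squares channel estimation from a known
# training sequence; all names and values are illustrative.
rng = np.random.default_rng(0)
L = 3                                   # assumed channel length
h_true = np.array([0.9, 0.5, 0.2])      # "unknown" channel, for the demo only
training = rng.choice([-1.0, 1.0], 26)  # training symbols known to both ends

# Received training samples (noiseless here; a real burst adds noise).
received = np.convolve(training, h_true)[:len(training)]

# Build the convolution matrix A such that received = A @ h, where
# column k of A is the training sequence delayed by k samples.
A = np.column_stack(
    [np.concatenate([np.zeros(k), training[:len(training) - k]])
     for k in range(L)]
)

# Solve for the channel taps in the least-squares sense.
h_est, *_ = np.linalg.lstsq(A, received, rcond=None)
print(np.round(h_est, 3))
```

In the noiseless case the estimate matches the true taps; with noise, the same solver returns the least-squares fit used by the equalizer.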
The equalization processing operation is conventionally followed by channel decoding processing operations intended for correcting any errors. The channel decoding is itself conventionally followed by another decoding operation called source decoding, which is intended for reconstructing the information (e.g., speech) initially coded at the level of the sender.
At the level of the receiver, a signal is received that comprises versions of the signal sent which are temporally delayed with gains that may be different. The channel equalization processing includes reversing the effect of the channel to obtain samples representative of a single symbol. The Viterbi algorithm forms part of the conventional processing operations well known to the person skilled in the art for equalizing the channel during transmissions with inter-symbol interference.
More precisely, when the transmission channel has an impulse response with L coefficients, for example, and delivers successive digital samples corresponding to successively transmitted symbols, each of which can take M different possible values, the estimation of the successive values of the symbols by using the Viterbi algorithm comprises a stage-by-stage progression through a trellis. All the states of all the stages are respectively provided with “aggregate metrics” according to a terminology well known to the person skilled in the art. These aggregate metrics are, for example, error information aggregated (e.g., calculated with the aid of a Euclidean norm) between the observed values and the expected values of the samples, on the basis of an assumption regarding the values of the symbols.
In a conventional implementation, the number of states of the trellis is equal to M^(L−1), with M denoting the number of different possible values each of the symbols can take. In step n, that is to say on taking into account the sample of rank n, we take M^(L−1) decisions regarding the symbol of rank n−L+1. This is based on the fact that ultimately only one decision will have to be produced. This decision is obtained by backtracking through the most probable path. Each of these decisions, also commonly designated by the person skilled in the art as a hard decision, is provided with a symbol-confidence index or confidence.
Furthermore, at each node or state of the stage of rank n, M paths or transitions converge, respectively arising from M states or nodes of the preceding stage, and corresponding to the M different values of the symbol of rank n−L+1. Among these converging paths, the so-called surviving path, according to terminology known to the person skilled in the art, is chosen as the one having the minimum aggregate metric, and allows progression through the trellis from one stage to the next.
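The trellis progression, aggregate metrics and surviving paths described above can be sketched for the smallest non-trivial case, BPSK (M = 2) over a channel with L = 2 coefficients, giving M^(L−1) = 2 states. All names and values are illustrative; for clarity this sketch also keeps full survivor paths rather than the fixed-delay traceback discussed below.

```python
# Hedged sketch of Viterbi equalization; M = 2, L = 2, so 2 trellis states.
h = [1.0, 0.5]                                  # assumed channel taps
symbols = [1.0, -1.0, -1.0, 1.0, 1.0, -1.0]     # symbols to recover (demo)
x_prev = 1.0                                    # known initial channel memory

# Noiseless received samples: r[n] = h[0]*x[n] + h[1]*x[n-1].
padded = [x_prev] + symbols
received = [h[0] * padded[n + 1] + h[1] * padded[n]
            for n in range(len(symbols))]

alphabet = [-1.0, 1.0]
metric = {(-1.0,): float("inf"), (1.0,): 0.0}   # start in the known state
survivors = {(-1.0,): [], (1.0,): []}

for r in received:
    new_metric, new_survivors = {}, {}
    for x in alphabet:                          # candidate current symbol
        best_m, best_path = float("inf"), None
        for (prev,), m in metric.items():       # M converging transitions
            expected = h[0] * x + h[1] * prev
            cand = m + (r - expected) ** 2      # Euclidean branch metric
            if cand < best_m:                   # keep the surviving path
                best_m, best_path = cand, survivors[(prev,)] + [x]
        new_metric[(x,)], new_survivors[(x,)] = best_m, best_path
    metric, survivors = new_metric, new_survivors

# Backtrack along the path with the minimum aggregate metric.
best_state = min(metric, key=metric.get)
print(survivors[best_state])                    # recovers the symbols sent
```

At each stage, each of the two states receives M = 2 converging transitions and keeps only the survivor with the minimum aggregate metric, exactly as described in the text.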
After a sufficiently long delay, typically a delay corresponding to 5L samples, all the surviving sequences are assumed to take the same hard decision with a high probability. A decision is then taken regarding the symbol of rank n−5L−(L−1). More precisely, to ascertain the hard decision and the symbol-confidence index associated with this decision, we backtrack through the path having the minimum aggregate metric.
Apart from the fact that such estimation processing yields decisions regarding the symbols only with a relatively large delay, it is necessary to store, during this time span, all the intermediate hard decisions along the various surviving paths, as well as all the associated confidence indices. This is done to ultimately retrieve, with regard to the symbol of rank n−5L−(L−1), its value and its symbol-confidence index. This information is stored in two arrays, one of symbols and the other of symbol-confidence indices, each including M^(L−1) rows and 5L columns. When the modulation is of a high order, i.e., when M is large, this size is even larger.
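The storage requirement just described, M^(L−1) rows by 5L columns per array, can be quantified with a short sketch. The parameter values below are hypothetical and serve only to show how the size grows with the modulation order M.

```python
def storage_cells(M, L):
    """Cells in ONE traceback array (hard decisions or confidence
    indices): M**(L-1) trellis states times a 5*L decision delay."""
    return M ** (L - 1) * 5 * L

# Illustrative sizes for an assumed channel length L = 5:
print(storage_cells(2, 5))   # binary modulation: 16 states * 25 columns
print(storage_cells(8, 5))   # 8-ary modulation: 4096 states * 25 columns
```

Since two such arrays are kept (symbols and confidence indices), the total doubles, and the exponential dependence on M makes high-order modulations particularly costly.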