Both the magnitude and phase shift attributable to a communication channel often vary over time. In the case of a wireless communication channel, such changes can result from any of a number of different factors, e.g., movement of a mobile wireless device, and/or changes in the environment between the two communicating devices (e.g., a cellular telephone and a base station) irrespective of any movement of either device.
In order to properly interpret received data, it is desirable for the wireless receiver to be able to predict or estimate such parameters for each time frame in which data are received. Unfortunately, in addition to the fact that the characteristics of the channel itself change over time, any transmitted information also is corrupted by noise (usually modeled as white Gaussian noise), making accurate predictions or estimations of the channel more difficult.
Channel estimation often is performed using a technique in which the transmitter sends not only information symbols but also, periodically, predetermined values (typically referred to as pilot signals). Then, any deviations in the received signal from the expected pilot can be attributed either to the communication channel or to noise. The main problem in both channel estimation and channel prediction is characterizing the channel in the presence of such noise.
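A pilot-based estimate of this kind can be sketched as a simple least-squares computation. The pilot value, channel gain, observation count, and noise level below are illustrative assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: a known pilot symbol, a fixed (unknown to the
# receiver) complex channel gain, and additive white Gaussian noise.
pilot = 1 + 1j                    # predetermined pilot value known to the receiver
h_true = 0.8 * np.exp(1j * 0.3)   # true complex channel gain for this time frame
noise_std = 0.05

# Several noisy observations of the pilot through the channel.
n_obs = 16
noise = noise_std * (rng.standard_normal(n_obs) + 1j * rng.standard_normal(n_obs)) / np.sqrt(2)
received = h_true * pilot + noise

# Least-squares estimate: undo the known pilot, then average to suppress noise.
h_est = np.mean(received / pilot)
```

Averaging more pilot observations shrinks the residual estimation error, since the deviation from the expected pilot is attributable only to the channel and the zero-mean noise.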
In addition to using the pilot signal for channel estimation, the received information symbols also can be used for this purpose. That is, the received and decoded symbols often can be assumed to be correct, so the channel can be estimated based on the signal that actually was received as compared with the signal that presumably was sent by the transmitter.
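Decision-directed estimation of this kind can be sketched as follows. The QPSK constellation, channel gain, prior estimate, and noise level are hypothetical choices for illustration: the receiver equalizes with a coarse prior estimate, slices each symbol to the nearest constellation point, and then treats those decisions as the transmitted symbols:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumptions: QPSK symbols through a fixed complex channel gain.
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))   # QPSK constellation
tx = qpsk[rng.integers(0, 4, size=64)]                       # transmitted symbols
h_true = 0.9 * np.exp(1j * 0.2)
noise = 0.05 * (rng.standard_normal(64) + 1j * rng.standard_normal(64)) / np.sqrt(2)
rx = h_true * tx + noise

# Coarse prior channel estimate (e.g., from a pilot), deliberately imperfect.
h_prior = h_true * 1.1 * np.exp(1j * 0.1)

# Hard decisions: equalize with the prior, then slice to the nearest symbol.
decisions = qpsk[np.argmin(np.abs((rx / h_prior)[:, None] - qpsk[None, :]), axis=1)]

# Assume the decisions are correct and re-estimate the channel from them.
h_est = np.mean(rx / decisions)
```

As long as the decisions are in fact correct, the re-estimate behaves exactly like a pilot-based estimate, with every decoded symbol serving as an additional pilot.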
Channel prediction techniques generally rely on the assumption that channel characteristics will be correlated from one time frame to the next and frequently can be modeled as a function that varies smoothly and not too quickly over time. Accordingly, channel prediction uses channel estimates from one or more time frames in order to predict the channel at a different time frame. From a frequency-domain perspective, the prediction technique generally operates as a low-pass filter, filtering out the higher frequencies of the added noise and thereby improving the channel measurement signal-to-noise ratio (SNR).
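The low-pass behavior can be seen in a minimal sketch: averaging the most recent past estimates passes a slowly varying channel while attenuating the high-frequency noise. The rotation rate, noise level, and window length below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical smoothly varying channel: a slow complex rotation, observed
# through noisy per-frame channel estimates.
t = np.arange(200)
h_true = np.exp(1j * 2 * np.pi * 0.002 * t)
noise = 0.2 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size)) / np.sqrt(2)
h_noisy = h_true + noise

# Predict frame n by averaging the K most recent past estimates.  The
# averaging acts as a low-pass filter: it tracks the slowly varying channel
# while suppressing the wideband noise, improving the measurement SNR.
K = 8
h_pred = np.array([np.mean(h_noisy[n - K:n]) for n in range(K, t.size)])

err_raw = np.mean(np.abs(h_noisy[K:] - h_true[K:]) ** 2)   # raw estimate error
err_pred = np.mean(np.abs(h_pred - h_true[K:]) ** 2)       # predictor output error
```

Because the channel varies slowly relative to the averaging window, the predictor's mean-squared error is well below that of the raw per-frame estimates.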
One category of channel prediction uses only past and current channel estimates to predict the next subsequent channel value. Unfortunately, these techniques often suffer from accuracy limitations, essentially requiring extrapolation to the next time frame.
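One source of that accuracy limitation can be illustrated directly: a linear extrapolation from two noisy estimates amplifies the estimate noise rather than suppressing it. The constant channel and noise level here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# A constant channel observed through noise; each trial provides a
# (previous, current) pair of noisy estimates.
h_true = 1.0 + 0.0j
sigma = 0.1
n = 10_000
noise = sigma * (rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))) / np.sqrt(2)
est = h_true + noise

# Linear extrapolation to the next frame: h_next = 2*current - previous.
h_next = 2 * est[:, 1] - est[:, 0]

var_est = np.mean(np.abs(est[:, 1] - h_true) ** 2)    # ~ sigma**2
var_pred = np.mean(np.abs(h_next - h_true) ** 2)      # ~ 5 * sigma**2 (weights 2 and -1)
```

The extrapolation weights (2 and -1) multiply the noise variance by roughly five, so any gain from tracking a changing channel must be weighed against this noise amplification.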
In certain implementations, prior to channel prediction a filtering operation is performed using past, current and subsequent channel estimates. The goal of such filtering is to reduce the channel estimate noise before prediction. In other words, noisy channel estimates are filtered and then subsequently fed into the channel predictor, so that from the channel predictor's viewpoint the input SNR is improved. However, the delay introduced by such techniques requires the channel predictor to predict the channel farther away from the enhanced channel estimates, thereby detracting from the prediction.
In other words, the filtering used in such techniques tends to reduce sample noise, but the additional delay makes prediction more difficult due to the increased prediction distance, particularly when the channel is changing rapidly. Further reducing sample noise requires a longer filter, and hence a longer delay. As a result, there is a practical limit on the useful sample SNR enhancement, because the longer prediction distance that accompanies longer filtering erodes the benefit of the SNR-enhanced samples.
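This trade-off can be sketched numerically. In the illustrative setup below (all parameters are assumptions), a symmetric smoothing filter of length 2D+1 over past, current and subsequent estimates reduces the noise, but its newest output is centered D frames in the past, so a simple hold-last-value predictor must reach D+1 frames ahead instead of one:

```python
import numpy as np

rng = np.random.default_rng(4)

def prediction_error(omega, sigma=0.3, D=5, n=2000):
    """Compare a hold-last-value predictor fed raw vs. smoothed estimates.

    omega: per-frame phase rotation of the hypothetical channel.
    D:     half-length of the smoothing filter; its output lags by D frames.
    """
    t = np.arange(n)
    h_true = np.exp(1j * omega * t)
    noise = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    h_noisy = h_true + noise

    # Smoothing over past, current and subsequent estimates (length 2*D + 1).
    kernel = np.ones(2 * D + 1) / (2 * D + 1)
    h_smooth = np.convolve(h_noisy, kernel, mode="valid")  # h_smooth[k] is centred at frame k + D

    # Predict frame m from the newest sample available before frame m:
    #   raw:      h_noisy[m - 1]            (1 frame old, full noise)
    #   smoothed: h_smooth[m - 1 - 2*D]     (centred D + 1 frames back, reduced noise)
    m = np.arange(2 * D + 1, n)
    err_raw = np.mean(np.abs(h_noisy[m - 1] - h_true[m]) ** 2)
    err_smooth = np.mean(np.abs(h_smooth[m - 1 - 2 * D] - h_true[m]) ** 2)
    return err_raw, err_smooth

slow_raw, slow_smooth = prediction_error(omega=0.002)  # slowly varying channel
fast_raw, fast_smooth = prediction_error(omega=0.3)    # rapidly varying channel
```

For the slowly varying channel the noise reduction dominates and the smoothed path wins; for the rapidly varying channel the longer prediction distance dominates and the smoothed path loses, which is the limit described above.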