Wireless cellular communication networks are well known and widely deployed, and provide mobile voice and data communications to millions of subscribers. In a cellular network, a fixed transceiver (base station, Node B, etc.) provides two-way radio communications with a plurality of subscribers within a geographic area, or cell (as used herein, the term sector is synonymous with cell). A perennial design goal of cellular communication networks is to efficiently and consistently deliver communication services to mobile subscribers at high data rates.
Many modern wireless communication protocols, such as High-Speed Downlink Packet Access (HSDPA) and the Long Term Evolution (LTE) of UTRAN, utilize link adaptation to maximize the data rate of downlink communications under varying link quality. Link adaptation—also known in the art as adaptive modulation and coding—is a technique to maximize data rates by dynamically altering the modulation (e.g., QPSK, 16-QAM, 64-QAM), the level or degree of redundancy in Forward Error Correction (FEC) coding, and other signal and protocol parameters, to deliver the maximum rate to a UE given the radio link conditions. In link adaptation, the network transceiver selects from among a defined set of modulation techniques, coding schemes, and the like, based on an estimate of the instantaneous quality of the downlink channel to each UE. The Channel Quality Information is typically reported by the UE, and may comprise the Signal to Interference and Noise Ratio (SINR) measured or estimated by the UE. In Orthogonal Frequency Division Multiplexing (OFDM), the SINR vector over the sub-carriers allocated to a UE is SINR(t)=[SINR(k1;t), SINR(k2;t), . . . , SINR(K;t)], where SINR(k;t) is the SINR at sub-carrier "k" (k=k1, k2, . . . , K) at time "t."
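The selection step described above can be sketched in Python as a simple threshold lookup. This is a minimal illustration, not drawn from any standard: the MCS table, its SINR thresholds, and the use of an arithmetic-mean effective SINR are all hypothetical simplifications.

```python
# Hypothetical MCS table: (modulation, code_rate, min_sinr_db),
# ordered by increasing data rate. Real tables are standard-specific.
MCS_TABLE = [
    ("QPSK", 1/3, -3.0),
    ("QPSK", 1/2, 1.0),
    ("16-QAM", 1/2, 7.0),
    ("16-QAM", 3/4, 11.0),
    ("64-QAM", 3/4, 15.0),
]

def effective_sinr_db(sinr_vector_db):
    """Crude effective SINR over the allocated sub-carriers
    (arithmetic mean of the per-sub-carrier SINR vector, in dB)."""
    return sum(sinr_vector_db) / len(sinr_vector_db)

def select_mcs(sinr_db):
    """Pick the highest-rate MCS whose SINR threshold the link meets."""
    chosen = MCS_TABLE[0]  # fall back to the most robust scheme
    for mcs in MCS_TABLE:
        if sinr_db >= mcs[2]:
            chosen = mcs
    return chosen
```

For example, a reported SINR vector of [8.0, 9.5, 7.2] dB averages to about 8.2 dB, which under these assumed thresholds selects rate-1/2 16-QAM rather than a higher-rate scheme.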
The SINR(k;t) experienced by a UE, in general, depends on the desired signal transmitted to the UE, interference from transmissions to other UEs in the same sub-cell, interference from transmissions to other UEs in other sub-cells, and thermal noise. Conventional link adaptation can be described as UE-centric, in that each UE periodically measures SINR(k;t), and these measurements are reported to the network—with a delay of several Transmission Time Intervals (TTI)—on the uplink, e.g., in Channel Quality Information (CQI) reports. A significant shortcoming of such UE-centric link adaptation is that in packet-oriented cellular systems, the own-cell and other-cell interference typically change from one TTI to the next, depending on scheduling at the network transceivers. Accordingly, the UE-reported SINR(k;t) is a very poor predictor of SINR(k; t+d), where "d" is a positive delay. This poor prediction leads to underutilization of scarce radio resources, and can significantly reduce the overall spectral efficiency of the system. Furthermore, attempts to improve the predictive value of UE-reported SINR(k; t+d) by increasing the CQI reporting frequency, to shorten "d," increase uplink congestion and interference, and reduce the uplink capacity available for user data.
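The staleness problem above can be illustrated with a toy numerical model. All values here are hypothetical assumptions for illustration only: interference from a neighbouring transceiver is modelled as switching on or off each TTI with a coin flip, and the CQI report is assumed to arrive d TTIs late.

```python
import math
import random

random.seed(0)

SIGNAL_MW = 100.0  # assumed desired signal power at the UE (linear scale)
NOISE_MW = 1.0     # assumed thermal noise power

def sinr_db(interference_mw):
    """SINR in dB for a given instantaneous interference power."""
    return 10 * math.log10(SIGNAL_MW / (NOISE_MW + interference_mw))

# Whether the neighbouring transceiver schedules a transmission in a
# given TTI is modelled as a coin flip, toggling interference on/off.
interference = [random.choice([0.0, 50.0]) for _ in range(10)]

d = 3  # assumed CQI reporting delay in TTIs
# Gap between the SINR measured at TTI t (what the UE reported) and
# the SINR actually experienced at TTI t+d (when the report is used).
staleness_db = [abs(sinr_db(interference[t]) - sinr_db(interference[t + d]))
                for t in range(len(interference) - d)]
```

Under these assumed numbers, the SINR is either 20 dB (interference off) or about 2.9 dB (interference on), so whenever the scheduling state flips between report and use, the stale report misstates the achievable rate by roughly 17 dB.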
The accurate prediction of instantaneous SINR experienced at UEs, to enable fast and accurate link adaptation, stands as a major challenge in wireless communication network design and operation.