The present invention generally relates to wireless communication networks, and particularly relates to estimating received signal quality in such networks.
The term “link adaptation” in the context of wireless communication networks generally connotes the dynamic modification of one or more transmit signal parameters responsive to changing network and radio conditions. For example, evolving wireless communication standards define shared packet data channels that serve a number of mobile stations, also referred to as “users,” on a scheduled basis.
Wideband Code Division Multiple Access (WCDMA) standards define, for example, a High Speed Downlink Packet Access (HSDPA) mode wherein a High Speed Physical Downlink Shared Channel (HS-PDSCH) is used on a scheduled basis to transmit packet data to a potentially large number of users. The IS-856 standard defines a similar shared packet data channel service, known as High Data Rate (HDR), and cdma2000 standards, such as 1xEV-DO, define similar high-speed shared packet data channel services.
Generally, the shared packet data channels in all such services are rate controlled rather than power controlled, meaning that the channel signal is transmitted at full available power and the data rate of the channel is adjusted for that power based on the radio conditions reported by the mobile station being served at that instant in time. For a given transmit power, then, the data rate of the channel generally will be higher when the mobile station being served is in good radio conditions than when it is experiencing poor radio conditions. Of course, other parameters may bear on the data rate actually used, such as the type of service in which the mobile station is engaged, etc.
Nonetheless, when serving a particular mobile station, the efficient utilization of the shared channel depends in large measure on the accuracy of the channel quality reported for that mobile station, since that variable represents a primary input into the data rate selection process. Simply put, if the mobile station has over-reported its channel quality, it is apt to be served at too high a rate, leading to a high block error rate. Conversely, if the mobile station under-reports its channel quality, it will be underserved. That is, it will be served at a data rate that is less than its actual channel conditions could support.
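The dependence of the served rate on the reported quality can be illustrated with a short sketch. A Shannon-capacity model stands in for any particular standard's rate table; the 5 MHz bandwidth and the specific SINR values are illustrative assumptions only:

```python
import math

def achievable_rate_mbps(reported_sinr_db, bandwidth_mhz=5.0):
    """Shannon-capacity sketch of rate-controlled scheduling: at fixed
    transmit power, the selected data rate tracks the reported signal
    quality. The bandwidth and capacity model are illustrative
    assumptions, not any particular standard's rate selection table."""
    sinr = 10.0 ** (reported_sinr_db / 10.0)  # dB -> linear
    return bandwidth_mhz * math.log2(1.0 + sinr)

true_rate = achievable_rate_mbps(10.0)   # quality the channel actually supports
served_rate = achievable_rate_mbps(6.0)  # rate granted under an under-report
# served_rate < true_rate: the mobile station is served below what
# its actual channel conditions could support.
```

Under this model, a 4 dB under-report costs the mobile station roughly a third of its achievable throughput, which is why report accuracy bears directly on shared-channel utilization.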
The under-reporting of channel quality may particularly arise when the apparent impairment (interference plus noise) at the receiver comprises both harmful and “benign” interference. As the term is used herein, “benign” interference is interference that affects the calculation of apparent signal quality but, in reality, does not overly degrade data signal demodulation. Thus, benign interference of a given power results in a much lower data error rate than would harmful interference of the same power. Consequently, if the signal quality target is, say, a 10% frame or block error rate, the receiver could achieve that target in the presence of a greater level of benign interference than could be tolerated if the interference were non-benign.
By way of non-limiting example, the total received signal impairment at a given communication receiver may comprise a Gaussian impairment component arising from same-cell interference, other-cell interference, thermal noise, etc., and a non-Gaussian impairment component arising from, for example, so-called self-interference that occurs because of imperfect de-rotation of the received symbols. Other contributors to self-interference include local oscillator frequency errors and rapid channel fading conditions. Such interference may take on a probability distribution defined by the modulation format, e.g., a binomial distribution associated with a Binary Phase Shift Keying (BPSK) modulation format.
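The benign character of such bounded, non-Gaussian interference can be demonstrated with a small Monte Carlo sketch. Here a unit-power BPSK signal is impaired either by Gaussian noise or by a BPSK-like binary interferer of the same average power; the interference power of 0.5 and the sample count are assumptions chosen for illustration:

```python
import random
import math

def simulate_ber(interference, n=200_000, i_power=0.5, seed=1):
    """Monte Carlo bit error rate for BPSK (unit-power symbols) under two
    equal-power impairments: 'gaussian' noise versus a 'binary'
    (BPSK-like) interferer with no distribution tails. Illustrative
    sketch only; powers and sample count are assumptions."""
    rng = random.Random(seed)
    amp = math.sqrt(i_power)  # same average power in both cases
    errors = 0
    for _ in range(n):
        s = rng.choice((-1.0, 1.0))          # transmitted BPSK symbol
        if interference == "gaussian":
            r = s + rng.gauss(0.0, amp)      # Gaussian impairment
        else:
            r = s + rng.choice((-amp, amp))  # binary impairment, no tails
        if (r >= 0) != (s >= 0):             # sign-based decision
            errors += 1
    return errors / n

ber_gauss = simulate_ber("gaussian")
ber_binary = simulate_ber("binary")
# The binary interferer of the same power causes no symbol errors here,
# because its amplitude (sqrt(0.5) < 1) can never flip the symbol sign,
# whereas the Gaussian impairment's tails produce a substantial error rate.
```

The sketch makes the point concrete: an impairment distribution without Gaussian tails can be far less damaging to demodulation than its measured power suggests.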
Because the probability distribution of the non-Gaussian impairment does not include the characteristic “tails” of a Gaussian distribution, its effect on signal demodulation typically is not as severe as that of a Gaussian impairment. Indeed, the effect of even substantial amounts of non-Gaussian impairment may be relatively minor. Therefore, the conventional approach to estimating received signal quality at a wireless communication receiver, which is based on the apparent, total signal impairment, i.e., the sum of the Gaussian and non-Gaussian impairment components, may not provide a true picture of the receiver's current reception capabilities and, in fact, may cause the receiver to under-report its received signal quality by a significant amount.
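The magnitude of such under-reporting can be illustrated numerically. All power values below are assumed for illustration; the point is only the gap between a quality estimate that divides by the total apparent impairment and one that discounts a benign non-Gaussian component:

```python
import math

def db(x):
    """Convert a linear power ratio to decibels."""
    return 10.0 * math.log10(x)

# Illustrative (assumed) powers, in linear units:
signal_power = 1.0
gaussian_impairment = 0.10      # e.g., thermal noise + other-cell interference
non_gaussian_impairment = 0.15  # e.g., self-interference from imperfect de-rotation

# Conventional estimate: the total apparent impairment in the denominator.
apparent_sinr_db = db(signal_power / (gaussian_impairment + non_gaussian_impairment))

# If the non-Gaussian component is largely benign for demodulation,
# the effective signal quality is closer to:
effective_sinr_db = db(signal_power / gaussian_impairment)

# The gap (about 4 dB with these assumed powers) is the amount by which
# the conventional estimate under-reports the receiver's capability.
gap_db = effective_sinr_db - apparent_sinr_db
```

With these assumed powers the conventional estimate comes out roughly 4 dB pessimistic, which, fed into the rate selection process, translates directly into a lower served data rate than the receiver could actually sustain.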