Multi-antenna techniques can significantly increase the data rates and reliability of a wireless communication system. In particular, throughput and reliability can be drastically improved, in at least some radio environments, if both the transmitter and the receiver are equipped with multiple antennas. This arrangement results in a so-called multiple-input multiple-output (MIMO) communication channel; such systems and related techniques are commonly referred to as MIMO systems and MIMO techniques.
The LTE-Advanced standard is currently under development by the 3rd-Generation Partnership Project (3GPP). A core component in LTE-Advanced is the support of MIMO antenna deployments and MIMO related techniques for both downlink communications, i.e., base station to mobile station transmissions, and uplink communications, i.e., mobile station to base station transmissions. More particularly, a spatial multiplexing mode for uplink communications, referred to as single-user MIMO, or “SU-MIMO”, is under development. SU-MIMO is intended to provide mobile stations, called user equipment, or “UEs” in 3GPP terminology, with very high uplink data rates in favorable channel conditions.
SU-MIMO consists of the simultaneous transmission of multiple spatially multiplexed data streams within the same frequency bandwidth. Each of these multiplexed data streams is usually referred to as a “layer.” Multi-antenna techniques such as linear precoding are employed at the UE's transmitter in order to differentiate the layers in the spatial domain and to allow recovery of the transmitted data at the receiver of the base station, which is known as an eNodeB, or eNB, in 3GPP terminology.
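The precoding operation described above can be sketched numerically. In this illustrative example (the precoding matrix below is a hypothetical unitary-column choice, not an actual 3GPP codebook entry), two layers are mapped onto four transmit antennas, and the receiver recovers the layers through the effective channel formed by the physical channel and the precoder together:

```python
import numpy as np

# Sketch of uplink SU-MIMO linear precoding: two spatially multiplexed
# layers are mapped onto four transmit antennas by a precoding matrix W,
# and the eNB observes the layers through the effective channel H @ W.
rng = np.random.default_rng(0)

n_layers, n_tx, n_rx = 2, 4, 4

# QPSK symbols for each layer (one symbol period, illustrative only)
bits = rng.integers(0, 2, size=(2, n_layers))
s = ((1 - 2 * bits[0]) + 1j * (1 - 2 * bits[1])) / np.sqrt(2)  # shape (n_layers,)

# Hypothetical unitary-column precoder (NOT a 3GPP codebook entry)
W = np.array([[1, 0],
              [0, 1],
              [1, 0],
              [0, -1]], dtype=complex) / np.sqrt(2)   # shape (n_tx, n_layers)

x = W @ s   # antenna-domain transmit vector, shape (n_tx,)

# Rayleigh-fading physical channel between UE and eNB antennas
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

y = H @ x   # received vector (noise omitted for clarity)

# The receiver works with the *effective* channel H_eff = H @ W,
# which has one column per transmitted layer.
H_eff = H @ W
s_hat = np.linalg.lstsq(H_eff, y, rcond=None)[0]   # zero-forcing layer recovery

print(np.allclose(s_hat, s))   # layers recovered exactly in the noiseless case
```

Because noise is omitted, the least-squares solve inverts the effective channel exactly; in practice the recovery quality depends on the noise level and the conditioning of the effective channel.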
Another MIMO technique supported by LTE-Advanced is MU-MIMO, where multiple UEs belonging to the same cell are completely or partly co-scheduled in the same bandwidth and during the same time slots. Each UE in a MU-MIMO configuration may transmit multiple layers, thus operating in SU-MIMO mode.
To enable detection of all of the spatially-multiplexed data streams, the receiver must estimate an effective radio channel for each transmitted layer in the cell. Therefore, each UE needs to transmit a unique reference signal (RS) at least for each transmitted layer. The receiver, which is aware of which reference signal is associated with each layer, estimates the associated channel by applying a channel estimation algorithm to the reference signal. The estimated channel is an “effective” channel because it reflects the mapping of the spatially multiplexed layer to multiple antennas. The estimate of the effective channel response is then employed by the receiver in the detection process.
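The per-layer estimation step can be illustrated with a small sketch. Here each layer transmits a known reference sequence; because the sequences are chosen mutually orthogonal (DFT rows in this illustration, not the actual LTE reference sequences), the receiver recovers one effective-channel column per layer by correlating against each known sequence:

```python
import numpy as np

# Sketch of reference-signal-based estimation of the effective channel:
# one known, orthogonal reference sequence per layer lets the receiver
# isolate each layer's effective channel column by correlation.
rng = np.random.default_rng(1)

n_layers, n_rx, seq_len = 2, 4, 8

# Orthogonal reference sequences (rows of a normalized DFT matrix);
# illustrative only, not the sequences defined by the LTE specifications.
F = np.fft.fft(np.eye(seq_len)) / np.sqrt(seq_len)
RS = F[:n_layers, :]            # shape (n_layers, seq_len); RS @ RS.conj().T == I

# Effective channel (physical channel combined with precoding),
# one column per transmitted layer.
H_eff = (rng.standard_normal((n_rx, n_layers))
         + 1j * rng.standard_normal((n_rx, n_layers))) / np.sqrt(2)

Y = H_eff @ RS                  # received reference symbols (noise omitted)

# Least-squares estimate: correlate against each known reference sequence
H_hat = Y @ RS.conj().T

print(np.allclose(H_hat, H_eff))   # exact in the noiseless case
```

With noise present, the same correlation yields a least-squares estimate whose error scales with the noise power; practical receivers additionally interpolate and filter across the time-frequency grid.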
Orthogonal Frequency-Division Multiplexing (OFDM) technology is a key underlying component of LTE. As is well known to those skilled in the art, OFDM is a digital multi-carrier modulation scheme employing a large number of closely-spaced orthogonal sub-carriers. Each sub-carrier is separately modulated using conventional modulation techniques and channel coding schemes. In particular, 3GPP has specified Orthogonal Frequency Division Multiple Access (OFDMA) for the downlink transmissions from the base station to a mobile terminal, and single carrier frequency division multiple access (SC-FDMA) for uplink transmissions from a mobile terminal to a base station. Both multiple access schemes permit the available sub-carriers to be allocated among several users.
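The core OFDM mechanism, placing separately modulated symbols on orthogonal sub-carriers via an inverse FFT and using a cyclic prefix so that a multipath channel reduces to a single complex gain per sub-carrier, can be sketched as follows (parameters are illustrative, not LTE values):

```python
import numpy as np

# Minimal OFDM modulator/demodulator sketch: data symbols are placed on
# orthogonal sub-carriers via an IFFT; the cyclic prefix turns linear
# multipath convolution into circular convolution, i.e. a per-sub-carrier
# complex gain that a one-tap equalizer can undo.
rng = np.random.default_rng(2)

n_sc, cp_len = 64, 8   # illustrative sizes, not LTE parameters
sym = ((1 - 2 * rng.integers(0, 2, n_sc))
       + 1j * (1 - 2 * rng.integers(0, 2, n_sc)))   # QPSK on each sub-carrier

tx = np.fft.ifft(sym)                        # one OFDM symbol in time domain
tx_cp = np.concatenate([tx[-cp_len:], tx])   # prepend cyclic prefix

h = np.array([0.9, 0.3 + 0.2j, 0.1])         # short multipath channel (taps < CP)
rx = np.convolve(tx_cp, h)[: cp_len + n_sc]  # channel output (noise omitted)

rx_freq = np.fft.fft(rx[cp_len:])            # strip CP, back to frequency domain
H = np.fft.fft(h, n_sc)                      # channel is now diagonal per sub-carrier
sym_hat = rx_freq / H                        # one-tap equalizer per sub-carrier

print(np.allclose(sym_hat, sym))
```

The key point the sketch shows is that, as long as the channel delay spread stays within the cyclic prefix, each sub-carrier can be equalized independently with a single complex division.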
SC-FDMA technology employs specially formed OFDM signals, and is therefore often called “pre-coded OFDM” technology. Although similar in many respects to conventional OFDMA technology, SC-FDMA signals offer a reduced peak-to-average power ratio (PAPR) compared to OFDMA signals, thus allowing transmitter power amplifiers to be operated more efficiently. This in turn facilitates more efficient usage of a mobile terminal's limited battery resources. (SC-FDMA is described more fully in Myung, et al., “Single Carrier FDMA for Uplink Wireless Transmission,” IEEE Vehicular Technology Magazine, vol. 1, no. 3, September 2006, pp. 30-38.)
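The PAPR advantage mentioned above can be demonstrated numerically. In this illustrative comparison (sub-carrier counts and mapping are arbitrary choices, not LTE parameters), SC-FDMA is modeled as DFT-spreading the data symbols before the IFFT, which is why it is called “pre-coded OFDM”:

```python
import numpy as np

# Illustrative PAPR comparison: SC-FDMA DFT-precodes the data symbols
# before sub-carrier mapping and the IFFT, which typically lowers the
# peak-to-average power ratio relative to plain OFDMA.
rng = np.random.default_rng(3)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

n_data, n_fft, n_sym = 64, 256, 200   # oversized IFFT approximates the analog peak
ofdma, scfdma = [], []
for _ in range(n_sym):
    d = ((1 - 2 * rng.integers(0, 2, n_data))
         + 1j * (1 - 2 * rng.integers(0, 2, n_data))) / np.sqrt(2)   # QPSK block

    # OFDMA: map QPSK symbols directly onto a localized set of sub-carriers
    X = np.zeros(n_fft, complex)
    X[:n_data] = d
    ofdma.append(papr_db(np.fft.ifft(X)))

    # SC-FDMA: spread the same symbols with a DFT first, then map and IFFT
    X = np.zeros(n_fft, complex)
    X[:n_data] = np.fft.fft(d) / np.sqrt(n_data)
    scfdma.append(papr_db(np.fft.ifft(X)))

print(np.mean(scfdma) < np.mean(ofdma))   # SC-FDMA shows the lower average PAPR
```

The lower average PAPR is what allows the mobile terminal's power amplifier to be driven closer to saturation, improving power efficiency.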
LTE link resources are organized into “resource blocks,” defined as time-frequency blocks with a duration of 0.5 milliseconds, corresponding to one “slot”, or half a sub-frame, and encompassing a bandwidth of 180 kHz, corresponding to 12 sub-carriers with a spacing of 15 kHz. Of course, the exact definition of a resource block may vary between LTE and similar systems, and the inventive methods and apparatus described herein are not limited to these particular numbers. In general, however, resource blocks may be dynamically assigned to mobile terminals, and may be assigned independently for the uplink and the downlink. Depending on a mobile terminal's data throughput needs, the system resources allocated to it may be increased by allocating resource blocks across several sub-frames, or across several frequency blocks, or both. Thus, the instantaneous bandwidth allocated to a mobile terminal in a scheduling process may be dynamically adapted to respond to changing conditions.
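The resource-grid figures quoted above fit together as simple arithmetic, reproduced here for reference:

```python
# The LTE resource-block figures quoted above, expressed as arithmetic.
subcarrier_spacing_khz = 15     # sub-carrier spacing
subcarriers_per_rb = 12         # sub-carriers per resource block
slot_duration_ms = 0.5          # one slot = half a sub-frame

rb_bandwidth_khz = subcarrier_spacing_khz * subcarriers_per_rb
print(rb_bandwidth_khz)         # 180 kHz per resource block
print(slot_duration_ms * 2)     # 1.0 ms per sub-frame (two slots)
```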
LTE also employs multiple modulation formats, including at least QPSK, 16-QAM, and 64-QAM, as well as advanced coding techniques, so that data throughput may be optimized for any of a variety of signal conditions. Depending on the signal conditions and the desired data rate, a suitable combination of modulation format, coding scheme, and bandwidth is chosen, generally to maximize the system throughput. Power control is also employed to ensure acceptable bit error rates while minimizing interference between cells.
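A link-adaptation rule of the kind described, choosing among QPSK, 16-QAM, and 64-QAM according to signal conditions, might be sketched as a threshold lookup. The SNR thresholds below are purely illustrative placeholders, not values from any 3GPP specification:

```python
# Hypothetical link-adaptation rule: pick the highest-order modulation
# whose SNR threshold is met. Thresholds are illustrative, not 3GPP values.
def select_modulation(snr_db):
    table = [
        (18.0, "64-QAM", 6),   # 6 bits per symbol, needs the best channel
        (12.0, "16-QAM", 4),   # 4 bits per symbol
        (0.0,  "QPSK",   2),   # 2 bits per symbol, most robust
    ]
    for threshold, name, bits_per_symbol in table:
        if snr_db >= threshold:
            return name, bits_per_symbol
    return "QPSK", 2           # below all thresholds: fall back to QPSK

print(select_modulation(20.0))  # ('64-QAM', 6)
print(select_modulation(5.0))   # ('QPSK', 2)
```

A real scheduler combines such a choice with a coding rate and a bandwidth allocation, and power control then holds the error rate at an acceptable level.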
Efficient utilization of the air interface is a key goal of the LTE developers. An important advantage of OFDM technologies is the flexibility with which resources may be allocated, or “scheduled”, among multiple users. Theoretically, sub-carriers may be allocated by a base station to mobile terminals on an individual basis or in groups; in practice, allocations are typically made on a resource block basis. A variety of scheduling algorithms have been proposed for solving the problem of simultaneously serving multiple users in LTE systems. In general terms, scheduling algorithms are used as an alternative to first-come-first-served queuing and transmission of data packets. As is well known to those skilled in the art, simple scheduling algorithms include round-robin, fair queuing, and proportional fair scheduling. If differentiated or guaranteed quality of service is offered, as opposed to best-effort communication, weighted fair queuing may be utilized.
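Of the algorithms named above, proportional fair scheduling illustrates the throughput-versus-fairness trade-off most directly: each slot is granted to the user maximizing the ratio of instantaneous rate to smoothed average throughput. A minimal sketch, with illustrative rates and smoothing factor:

```python
# Sketch of proportional fair scheduling: each slot, the user maximizing
# (instantaneous rate / average throughput) wins, which favors users on a
# good channel relative to what they usually get, balancing throughput
# against fairness. All numbers are illustrative.
def proportional_fair(inst_rates, avg_tp, alpha=0.1):
    """Pick one user for this slot and update the smoothed averages."""
    metric = [r / max(t, 1e-9) for r, t in zip(inst_rates, avg_tp)]
    winner = max(range(len(inst_rates)), key=metric.__getitem__)
    # Exponentially smoothed average: only the winner adds new throughput.
    new_avg = [(1 - alpha) * t + (alpha * inst_rates[i] if i == winner else 0.0)
               for i, t in enumerate(avg_tp)]
    return winner, new_avg

# User 0 has the higher instantaneous rate, but user 1 is further above
# its own average (2.0 / 0.5 = 4 beats 10.0 / 5.0 = 2), so user 1 wins.
winner, avg = proportional_fair([10.0, 2.0], [5.0, 0.5])
print(winner)   # 1
```

Round-robin corresponds to ignoring the metric entirely, and a pure max-rate scheduler to using the instantaneous rate alone; proportional fair sits between the two.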
Channel-dependent scheduling may be used to take advantage of favorable channel conditions to increase throughput and system spectral efficiency. For example, in an OFDM system, channel quality indicator (CQI) reports, which typically indicate the signal-to-noise ratio (SNR) or signal-to-interference-plus-noise ratio (SINR) measured or estimated for a given channel, may be used in channel-dependent resource allocation schemes. The simplest scheme, conceptually, is to select the mobile terminal having the highest priority, whether based on fairness, quality-of-service guarantees, or other decision metric, and to allocate some number of sub-channels with the highest measured or estimated SINRs to the selected mobile terminal. This approach exploits the frequency diversity inherent to a multi-user OFDM system. Since different mobile terminals observe different frequency-dependent fading profiles, channel-dependent scheduling tends to allocate portions of the overall available bandwidth in a more efficient manner than arbitrary allocation of bandwidth chunks.
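The simple scheme just described, granting the selected terminal the sub-channels on which it reports the best SINR, reduces to a sort-and-take operation. The CQI values below are illustrative placeholders:

```python
# Sketch of the simple channel-dependent allocation described above: the
# highest-priority mobile terminal receives the n sub-channels on which it
# reports the best SINR. CQI values are illustrative placeholders.
def allocate_best_subchannels(sinr_db, n):
    """Return indices of the n sub-channels with the highest reported SINR."""
    return sorted(range(len(sinr_db)), key=lambda i: sinr_db[i], reverse=True)[:n]

cqi_report = [3.1, 9.4, 7.2, 12.8, 5.0, 10.1]    # per-sub-channel SINR in dB
print(allocate_best_subchannels(cqi_report, 3))  # [3, 5, 1]
```

Because each terminal's fading profile peaks on different sub-channels, repeating this selection per terminal tends to place every allocation near a local channel peak, which is the multi-user diversity gain the passage above describes.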