Third generation (3G) mobile radio systems, such as the universal mobile telecommunication system (UMTS) standardized within the third generation partnership project (3GPP), have been based on wideband code division multiple access (WCDMA) radio access technology. Today, 3G systems are deployed on a broad scale all around the world. After enhancing this technology by introducing high-speed downlink packet access (HSDPA) and an enhanced uplink, also referred to as high-speed uplink packet access (HSUPA), the next major step in the evolution of the UMTS standard has brought the combination of orthogonal frequency division multiplexing (OFDM) for the downlink and single carrier frequency division multiple access (SC-FDMA) for the uplink. This system has been named long term evolution (LTE) since it has been intended to cope with future technology evolutions.
The LTE system represents an efficient packet-based radio access and radio access network that provides full IP-based functionalities with low latency and low cost. The downlink will support the data modulation schemes QPSK, 16-QAM, and 64-QAM, and the uplink will support QPSK, 16-QAM, and, at least for some devices, also 64-QAM for physical data channel transmissions. The term "downlink" denotes the direction from the network to the terminal. The term "uplink" denotes the direction from the terminal to the network.
LTE's network access is to be extremely flexible, using a number of defined channel bandwidths between 1.4 and 20 MHz, compared with the fixed 5 MHz channels of UMTS terrestrial radio access (UTRA). Spectral efficiency is increased up to four-fold compared with UTRA, and improvements in architecture and signaling reduce round-trip latency. Multiple Input/Multiple Output (MIMO) antenna technology should enable 10 times as many users per cell as 3GPP's original WCDMA radio access technology. To suit as many frequency band allocation arrangements as possible, both paired (frequency division duplex, FDD) and unpaired (time division duplex, TDD) band operation is supported. LTE can co-exist with earlier 3GPP radio technologies, even in adjacent channels, and calls can be handed over to and from all of 3GPP's previous radio access technologies.
The overall architecture of an LTE network is shown in FIG. 1 and a more detailed representation of the E-UTRAN architecture is given in FIG. 2.
As can be seen in FIG. 1, the LTE architecture supports interconnection of different radio access networks (RAN) such as UTRAN or GERAN (GSM EDGE Radio Access Network), which are connected to the EPC via the Serving GPRS Support Node (SGSN). In a 3GPP mobile network, the mobile terminal 110 (called User Equipment, UE, or device) is attached to the access network via the Node B (NB) in the UTRAN and via the evolved Node B (eNB) in the E-UTRAN access. The NB and eNB 120 entities are known as base stations in other mobile networks. There are two data packet gateways located in the EPS for supporting the UE mobility—the Serving Gateway (SGW) 130 and the Packet Data Network Gateway 160 (PDN-GW, or PGW for short). Assuming the E-UTRAN access, the eNB entity 120 may be connected through wired lines to one or more SGWs via the S1-U interface ("U" stands for "user plane") and to the Mobility Management Entity 140 (MME) via the S1-MME interface. The SGSN 150 and MME 140 are also referred to as serving core network (CN) nodes.
As anticipated above and as depicted in FIG. 2, the E-UTRAN consists of eNodeBs 120, providing the E-UTRA user plane (PDCP/RLC/MAC/PHY) and control plane (RRC) protocol terminations towards the user equipment (UE). The eNodeB 120 hosts the Physical (PHY), Medium Access Control (MAC), Radio Link Control (RLC), and Packet Data Convergence Protocol (PDCP) layers that include the functionality of user-plane header compression and encryption. It also offers Radio Resource Control (RRC) functionality corresponding to the control plane. It performs many functions including radio resource management, admission control, scheduling, enforcement of negotiated uplink Quality of Service (QoS), cell information broadcast, ciphering/deciphering of user and control plane data, and compression/decompression of downlink/uplink user plane packet headers. The eNodeBs are interconnected with each other by means of the X2 interface.
The eNodeBs 120 are also connected by means of the S1 interface to the EPC (Evolved Packet Core), more specifically to the MME (Mobility Management Entity) by means of the S1-MME and to the Serving Gateway (SGW) by means of the S1-U. The S1 interface supports a many-to-many relation between MMEs/Serving Gateways and eNodeBs 120. The SGW routes and forwards user data packets, while also acting as the mobility anchor for the user plane during inter-eNodeB handovers and as the anchor for mobility between LTE and other 3GPP technologies (terminating S4 interface and relaying the traffic between 2G/3G systems and PDN GW). For idle state user equipments, the SGW terminates the downlink data path and triggers paging when downlink data arrives for the user equipment. It manages and stores user equipment contexts, e.g. parameters of the IP bearer service, network internal routing information. It also performs replication of the user traffic in case of lawful interception.
The MME 140 is the key control-node for the LTE access-network. It is responsible for idle mode user equipment tracking and paging procedure including retransmissions. It is involved in the bearer activation/deactivation process and is also responsible for choosing the SGW for a user equipment at the initial attach and at time of intra-LTE handover involving Core Network (CN) node relocation. It is responsible for authenticating the user (by interacting with the HSS). The Non-Access Stratum (NAS) signaling terminates at the MME and it is also responsible for generation and allocation of temporary identities to user equipments. It checks the authorization of the user equipment to camp on the service provider's Public Land Mobile Network (PLMN) and enforces user equipment roaming restrictions. The MME is the termination point in the network for ciphering/integrity protection for NAS signaling and handles the security key management. Lawful interception of signaling is also supported by the MME. The MME also provides the control plane function for mobility between LTE and 2G/3G access networks with the S3 interface terminating at the MME from the SGSN. The MME also terminates the S6a interface towards the home HSS for roaming user equipments.
FIGS. 3 and 4 illustrate the structure of a component carrier in LTE release 8. The downlink component carrier of 3GPP LTE Release 8 is subdivided in the time-frequency domain into so-called subframes, each of which is divided into two downlink slots as shown in FIG. 3. A downlink slot corresponding to a time period Tslot is shown in detail in FIGS. 3 and 4 with the reference numeral 320. The first downlink slot of a subframe comprises a control channel region (PDCCH region) within the first OFDM symbol(s). Each subframe consists of a given number of OFDM symbols in the time domain (12 or 14 OFDM symbols in 3GPP LTE (Release 8)), wherein each OFDM symbol spans the entire bandwidth of the component carrier.
In particular, the smallest unit of resources that can be assigned by a scheduler is a resource block, also called physical resource block (PRB). With reference to FIG. 4, a PRB 330 is defined as NsymbDL consecutive OFDM symbols in the time domain and NscRB consecutive sub-carriers in the frequency domain. In practice, the downlink resources are assigned in resource block pairs. A resource block pair consists of two resource blocks. It spans NscRB consecutive sub-carriers in the frequency domain and the entire 2·NsymbDL modulation symbols of the subframe in the time domain. NsymbDL may be either 6 or 7, resulting in either 12 or 14 OFDM symbols in total. Consequently, a physical resource block 330 consists of NsymbDL×NscRB resource elements corresponding to one slot in the time domain and 180 kHz in the frequency domain (further details on the downlink resource grid can be found, for example, in 3GPP TS 36.211, "Evolved universal terrestrial radio access (E-UTRA); physical channels and modulations (Release 10)", version 10.4.0, 2012, Section 6.2, freely available at www.3gpp.org, which is incorporated herein by reference). While it can happen that some resource elements within a resource block or resource block pair are not used even though the block has been scheduled, for simplicity of terminology the whole resource block or resource block pair is still said to be assigned. Examples of resource elements that are actually not assigned by a scheduler include reference signals, broadcast signals, synchronization signals, and resource elements used for various control signal or channel transmissions.
The number of physical resource blocks NRBDL in downlink depends on the downlink transmission bandwidth configured in the cell and is at present defined in LTE as being from the interval of 6 to 110 (P)RBs. It is common practice in LTE to denote the bandwidth either in units of Hz (e.g. 10 MHz) or in units of resource blocks; e.g. for the downlink case the cell bandwidth can equivalently be expressed as e.g. 10 MHz or NRBDL=50 RB.
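The resource-grid arithmetic above can be sketched as follows. This is an illustrative sketch, assuming 15 kHz subcarrier spacing, NscRB=12 subcarriers per resource block, and NsymbDL=7 symbols per slot (normal cyclic prefix); the function names are not from any 3GPP API.

```python
# Sketch of the resource-grid arithmetic described above (assumptions:
# 15 kHz subcarrier spacing, N_sc_RB = 12, normal cyclic prefix with
# N_symb_DL = 7; names are illustrative).

SUBCARRIER_SPACING_HZ = 15_000
N_SC_RB = 12        # subcarriers per resource block
N_SYMB_DL = 7       # OFDM symbols per slot (normal cyclic prefix)

def prb_bandwidth_hz() -> int:
    """Bandwidth spanned by one PRB in the frequency domain (180 kHz)."""
    return N_SC_RB * SUBCARRIER_SPACING_HZ

def resource_elements_per_prb() -> int:
    """Resource elements in one PRB (one slot deep in the time domain)."""
    return N_SYMB_DL * N_SC_RB

def occupied_bandwidth_mhz(n_rb_dl: int) -> float:
    """Bandwidth in MHz occupied by n_rb_dl PRBs."""
    return n_rb_dl * prb_bandwidth_hz() / 1e6

print(prb_bandwidth_hz())           # 180000 -> 180 kHz per PRB
print(resource_elements_per_prb())  # 84 resource elements per PRB
print(occupied_bandwidth_mhz(50))   # 9.0 MHz occupied by NRBDL=50
```

Note that the 50 RBs of a "10 MHz" cell occupy only 9 MHz; the remainder is guard band.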
A channel resource may be defined as a “resource block” as exemplary illustrated in FIG. 3 where a multi-carrier communication system, e.g. employing OFDM as for example discussed in the LTE work item of 3GPP, is assumed. More generally, it may be assumed that a resource block designates the smallest resource unit on an air interface of a mobile communication that can be assigned by a scheduler. The dimensions of a resource block may be any combination of time (e.g. time slot, subframe, frame, etc. for time division multiplex (TDM)), frequency (e.g. subband, carrier frequency, etc. for frequency division multiplex (FDM)), code (e.g. spreading code for code division multiplex (CDM)), antenna (e.g. Multiple Input Multiple Output (MIMO)), etc. depending on the access scheme used in the mobile communication system.
The data are mapped onto physical resource blocks by means of pairs of virtual resource blocks. A pair of virtual resource blocks is mapped onto a pair of physical resource blocks. The following two types of virtual resource blocks are defined according to their mapping on the physical resource blocks in LTE downlink: Localised Virtual Resource Block (LVRB) and Distributed Virtual Resource Block (DVRB). In the localised transmission mode using the localised VRBs, the eNB has full control over which and how many resource blocks are used, and usually exercises this control to pick resource blocks that result in a large spectral efficiency. In most mobile communication systems, this results in adjacent physical resource blocks or multiple clusters of adjacent physical resource blocks for the transmission to a single user equipment, because the radio channel is coherent in the frequency domain: if one physical resource block offers a large spectral efficiency, then it is very likely that an adjacent physical resource block offers a similarly large spectral efficiency. In the distributed transmission mode using the distributed VRBs, the physical resource blocks carrying data for the same UE are distributed across the frequency band in order to hit at least some physical resource blocks that offer a sufficiently large spectral efficiency, thereby obtaining frequency diversity.
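The contrast between the two mappings can be illustrated with a toy interleaver. This is a sketch of the idea only; the actual Rel-8 DVRB interleaver defined in TS 36.211 is different.

```python
from math import gcd

def localized_mapping(vrbs):
    """Localised VRBs: VRB n maps to PRB n -> adjacent physical blocks."""
    return list(vrbs)

def distributed_mapping(vrbs, n_rb, stride=13):
    """Toy distributed mapping: stride through the band so consecutive
    VRBs of one UE land on widely separated PRBs (frequency diversity).
    Requires gcd(stride, n_rb) == 1 so the mapping is collision-free.
    Illustrative only -- the real Rel-8 DVRB interleaver differs."""
    assert gcd(stride, n_rb) == 1
    return [(v * stride) % n_rb for v in vrbs]

print(localized_mapping(range(4)))        # [0, 1, 2, 3]
print(distributed_mapping(range(4), 50))  # [0, 13, 26, 39]
```

The localised allocation exploits frequency-domain channel coherence; the distributed one deliberately breaks it to gain diversity.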
In 3GPP LTE Release 8 the downlink control signalling is basically carried by the following three physical channels:
- Physical control format indicator channel (PCFICH) for indicating the number of OFDM symbols used for control signalling in a subframe (i.e. the size of the control channel region);
- Physical hybrid ARQ indicator channel (PHICH) for carrying the downlink ACK/NACK associated with uplink data transmission; and
- Physical downlink control channel (PDCCH) for carrying downlink scheduling assignments and uplink scheduling assignments.
The PCFICH is sent from a known position within the control signalling region of a downlink subframe using a known pre-defined modulation and coding scheme. The user equipment decodes the PCFICH in order to obtain information about a size of the control signalling region in a subframe, for instance, the number of OFDM symbols. If the user equipment (UE) is unable to decode the PCFICH or if it obtains an erroneous PCFICH value, it will not be able to correctly decode the L1/L2 control signalling (PDCCH) comprised in the control signalling region, which may result in losing all resource assignments contained therein.
The PDCCH carries control information, such as, for instance, scheduling grants for allocating resources for downlink or uplink data transmission. The PDCCH for the user equipment is transmitted on the first one, two or three OFDM symbols of a subframe, according to the PCFICH.
The physical downlink shared channel (PDSCH) is used to transport user data. The PDSCH is mapped to the remaining OFDM symbols within one subframe after the PDCCH. The PDSCH resources allocated for one UE are in units of resource blocks for each subframe.
Physical uplink shared channel (PUSCH) carries user data. Physical Uplink Control Channel (PUCCH) carries signalling in the uplink direction such as scheduling requests, HARQ positive and negative acknowledgements in response to data packets on PDSCH, and channel state information (CSI).
The frequency spectrum for IMT-Advanced was decided at the World Radio-communication Conference 2007 (WRC-07). Although the overall frequency spectrum for IMT-Advanced was decided, the actual available frequency bandwidth is different according to each region or country. Following the decision on the available frequency spectrum outline, standardization of a radio interface started in the 3rd Generation Partnership Project (3GPP). At the 3GPP TSG RAN #39 meeting, the Study Item description on "Further Advancements for E-UTRA (LTE-Advanced)" was approved. The study item covers technology components to be considered for the evolution of E-UTRA, e.g. to fulfill the requirements on IMT-Advanced.
The bandwidth that the LTE-Advanced system is able to support is 100 MHz, while an LTE system can only support 20 MHz. Nowadays, the lack of radio spectrum has become a bottleneck in the development of wireless networks, and as a result it is difficult to find a spectrum band which is wide enough for the LTE-Advanced system. Consequently, it is urgent to find a way to gain a wider radio spectrum band, wherein a possible answer is the carrier aggregation functionality. In carrier aggregation, two or more component carriers are aggregated in order to support wider transmission bandwidths up to 100 MHz. The term "component carrier" refers to a combination of several resource blocks. In future releases of LTE, the term "component carrier" is no longer used; instead, the terminology is changed to "cell", which refers to a combination of downlink and optionally uplink resources. The linking between the carrier frequency of the downlink resources and the carrier frequency of the uplink resources is indicated in the system information transmitted on the downlink resources. Several cells of the LTE system are aggregated into one wider channel in the LTE-Advanced system, which is wide enough for 100 MHz, even though these cells in LTE may be in different frequency bands. All component carriers can be configured to be LTE Rel. 8/9 compatible, at least when the aggregated numbers of component carriers in the uplink and the downlink are the same. Not all component carriers aggregated by a user equipment need necessarily be Rel. 8/9 compatible. Existing mechanisms (e.g. barring) may be used to prevent Rel-8/9 user equipments from camping on such a component carrier. A user equipment may simultaneously receive or transmit on one or multiple component carriers (corresponding to multiple serving cells) depending on its capabilities. An LTE-A Rel. 10 user equipment with reception and/or transmission capabilities for carrier aggregation can simultaneously receive and/or transmit on multiple serving cells, whereas an LTE Rel. 8/9 user equipment can receive and transmit on a single serving cell only, provided that the structure of the component carrier follows the Rel. 8/9 specifications.
The principle of link adaptation is fundamental to the design of a radio interface which is efficient for packet-switched data traffic. Unlike the early versions of UMTS (Universal Mobile Telecommunication System), which used fast closed-loop power control to support circuit-switched services with a roughly constant data rate, link adaptation in LTE adjusts the transmitted data rate (modulation scheme and channel coding rate) dynamically to match the prevailing radio channel capacity for each user.
For the downlink data transmissions in LTE, the eNodeB typically selects the modulation scheme and code rate (MCS) depending on a prediction of the downlink channel conditions. An important input to this selection process is the Channel State Information (CSI) feedback transmitted by the User Equipment (UE) in the uplink to the eNodeB.
Channel state information is used in a multi-user communication system, such as for example 3GPP LTE, to determine the quality of channel resource(s) for one or more users. In general, in response to the CSI feedback the eNodeB can select between the QPSK, 16-QAM and 64-QAM schemes and a wide range of code rates. This CSI information may be used to aid a multi-user scheduling algorithm in assigning channel resources to different users, or to adapt link parameters such as modulation scheme, coding rate or transmit power, so as to exploit the assigned channel resources to their fullest potential.
The CSI is reported for every component carrier, and, depending on the reporting mode and bandwidth, for different sets of subbands of the component carrier. A channel resource may again be defined as a "resource block" as exemplarily illustrated in FIG. 4 and as already discussed above with reference to FIG. 3.
Assuming that the smallest assignable resource unit is a resource block, in the ideal case channel quality information for all resource blocks and all users should always be available. However, due to the constrained capacity of the feedback channel this is most likely not feasible. Therefore, reduction or compression techniques are required so as to reduce the channel quality feedback signalling overhead, e.g. by transmitting channel quality information only for a subset of resource blocks for a given user.
In 3GPP LTE, the smallest unit for which channel quality is reported is called a subband, which consists of multiple frequency-adjacent resource blocks.
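The feedback-compression idea above can be sketched as follows: instead of one channel quality value per resource block, one value is reported per subband of frequency-adjacent RBs. The function name and the simple averaging rule are illustrative assumptions, not the reporting procedure of the specification.

```python
# Sketch of subband-based CQI compression (illustrative averaging rule).

def subband_cqi(per_rb_cqi, subband_size):
    """Compress per-RB channel quality into one averaged value per
    subband of frequency-adjacent resource blocks."""
    reports = []
    for start in range(0, len(per_rb_cqi), subband_size):
        chunk = per_rb_cqi[start:start + subband_size]
        reports.append(round(sum(chunk) / len(chunk)))
    return reports

# 8 RBs grouped into subbands of 4 RBs: feedback shrinks from 8 values to 2.
print(subband_cqi([10, 11, 9, 10, 4, 5, 5, 6], 4))  # [10, 5]
```

The price of the compression is frequency resolution: the scheduler can no longer distinguish good and bad RBs inside one subband.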
Accordingly, the resource grants are transmitted from the eNodeB to the UE in downlink control information (DCI) via the PDCCH. The downlink control information may be transmitted in different formats, depending on the signaling information necessary. In general, the DCI may include:
- a resource block assignment (RBA), and
- a modulation and coding scheme (MCS).
The DCI may include further information, depending on the signaling information necessary, as also described in Section 9.3.2.3 of the book "LTE: The UMTS Long Term Evolution from theory to practice" by S. Sesia, I. Toufik, M. Baker, April 2009, John Wiley & Sons, ISBN 978-0-470-69716-0, which is incorporated herein by reference. For instance, the DCI may further include HARQ related information such as redundancy version (RV), HARQ process number, or new data indicator (NDI); MIMO related information such as pre-coding; power control related information, etc. Other channel quality elements may be the Precoding Matrix Indicator (PMI) and the Rank Indicator (RI). Details about the involved reporting and transmission mechanisms are given in the following specifications, to which it is referred for further reading (all documents available at http://www.3gpp.org and incorporated herein by reference):
- 3GPP TS 36.211, "Evolved Universal Terrestrial Radio Access (E-UTRA); Physical channels and modulation", version 10.0.0, particularly sections 6.3.3, 6.3.4;
- 3GPP TS 36.212, "Evolved Universal Terrestrial Radio Access (E-UTRA); Multiplexing and channel coding", version 10.0.0, particularly sections 5.2.2, 5.2.4, 5.3.3;
- 3GPP TS 36.213, "Evolved Universal Terrestrial Radio Access (E-UTRA); Physical layer procedures", version 10.0.1, particularly sections 7.1.7 and 7.2.
The resource block assignment specifies the physical resource blocks which are to be used for the transmission in uplink or downlink.
The modulation and coding scheme defines the modulation scheme employed for the transmission, such as QPSK, 16-QAM or 64-QAM. The lower the order of the modulation, the more robust the transmission. Thus, higher-order modulations, such as 64-QAM, are typically used when the channel conditions are good. The modulation and coding scheme also defines a code rate for a given modulation, i.e. the number of information bits carried in a predefined resource. The code rate is chosen depending on the radio link conditions: a lower code rate can be used in poor channel conditions and a higher code rate can be used in the case of good channel conditions. "Good" and "bad" here are used in terms of the signal to interference plus noise ratio (SINR). The finer adaptation of the code rate is achieved by puncturing or repetition of the generic rate, depending on the error correcting coder type.
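The relation between transport block size, allocation and code rate stated above can be illustrated numerically. This is a rough sketch: the number of data resource elements per PRB pair used below is an assumed round figure after control and reference-signal overhead, and the exact count depends on the configuration.

```python
# Approximate code rate: information bits (transport block + CRC)
# divided by the channel bits available on the allocation.
# re_per_prb = 120 is an assumed rough figure for data REs per PRB pair.

def effective_code_rate(tbs_bits, n_prb, qm, re_per_prb=120, crc_bits=24):
    """qm is the modulation order: 2 (QPSK), 4 (16-QAM), 6 (64-QAM)."""
    channel_bits = n_prb * re_per_prb * qm
    return (tbs_bits + crc_bits) / channel_bits

# Example: a 2216-bit transport block on 10 PRB pairs with 16-QAM (Qm=4).
print(round(effective_code_rate(2216, 10, 4), 3))  # ~0.467
```

The same transport block sent with QPSK (Qm=2) would double the code rate, which is why low-order modulation is paired with low code rates in poor conditions.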
FIG. 6 shows an example of an MCS table used in LTE release 11 to determine the modulation order (Qm) used on the physical downlink shared channel. In downlink, the levels between 0 and 9 represent the robust QPSK modulation. For the uplink, LTE release 11 foresees an MCS table which essentially has the same structure as the MCS table for the downlink channel (for more details refer to 3GPP TS 36.213, "Evolved Universal Terrestrial Radio Access (E-UTRA); Physical layer procedures", version 11.1.0, sections 7 and 8, respectively, and in particular Tables 7.1.7.1-1 for downlink and 8.6.1-1 for uplink). The remaining levels specify configurations with higher-order modulation schemes; the levels corresponding to the higher indexes (17 to 28) represent the 64-QAM modulation scheme. The QPSK and 16-QAM modulation schemes are also referred to as low-order modulation schemes when compared to the 64-QAM modulation scheme. In general, the term "lower-order modulation scheme" is to be understood as any modulation order lower than the highest supported modulation order.
The first column of the MCS table defines an index which is actually signaled, for instance in the DCI, in order to provide a setting for modulation and coding scheme. The second column of the MCS table provides the order of the modulation associated with the index, according to which order 2 means QPSK, order 4 means 16-QAM and order 6 means 64-QAM. The third column of the table includes transport block size index which refers to predefined sizes of transport blocks and thus also to a coding rate (amount of redundancy added to the data). The transport block size (TBS) index in the third column of the MCS table refers to a TBS table (cf. for instance, Table 7.1.7.2.1-1 in the 3GPP TS 36.213, cited above), which includes rows with a first column corresponding to the number of the TBS index and the following columns specifying the transport block sizes for the respective numbers of resource blocks, which are signaled in the DCI and in particular in the resource block allocation (RBA) part thereof.
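The column structure just described can be sketched as a lookup function. The index ranges follow those given above (0-9 QPSK, 17-28 64-QAM, and hence 10-16 16-QAM, cf. Table 7.1.7.1-1 of 3GPP TS 36.213); the exact I_TBS offsets are reproduced here as a sketch and should be checked against the specification table.

```python
# Sketch of the 5-bit MCS index -> (modulation order Qm, TBS index) mapping
# for the downlink, following the index ranges given in the text.
# Indices 29-31 carry only the modulation order (used for retransmissions).

def mcs_to_qm_itbs(i_mcs):
    if not 0 <= i_mcs <= 31:
        raise ValueError("MCS index is a 5-bit field (0..31)")
    if i_mcs <= 9:
        return 2, i_mcs                 # QPSK, I_TBS 0..9
    if i_mcs <= 16:
        return 4, i_mcs - 1             # 16-QAM, I_TBS 9..15
    if i_mcs <= 28:
        return 6, i_mcs - 2             # 64-QAM, I_TBS 15..26
    return {29: 2, 30: 4, 31: 6}[i_mcs], None  # retransmission entries

print(mcs_to_qm_itbs(0))    # (2, 0)  -> most robust QPSK entry
print(mcs_to_qm_itbs(17))   # (6, 15) -> lowest 64-QAM entry
print(mcs_to_qm_itbs(31))   # (6, None)
```

Note that adjacent modulation ranges share a TBS index at the boundary (e.g. I_TBS 9 and 15 each appear twice), allowing the same spectral efficiency with two different modulation/code-rate trade-offs.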
A transport block is a data unit which includes the data to be transmitted and which is provided for transmission by the higher layers, i.e. mapped onto the physical resources in accordance with the control information including scheduling information and/or according to the settings by the higher layers. Transport blocks are mapped onto the respective resource blocks, i.e. in general onto fixed-size time slots (time domain portions).
In the coming years, operators will begin deploying a new network architecture termed Heterogeneous Networks (HetNet). A typical HetNet deployment as currently discussed within 3GPP consists of macro and pico cells. Pico cells are formed by low power eNBs that may be advantageously placed at traffic hotspots in order to offload traffic from macro cells. Macro and pico eNBs implement the scheduling independently from each other. The mix of high power macro cells and low power pico cells can provide additional capacity and improved coverage.
Generally a terminal, such as a user equipment (UE), connects to the node with the strongest downlink signal. In FIG. 5A, the area surrounding the low power eNBs and delimited by a solid line edge is the area where the downlink signal of the low power eNB is the strongest. User equipments within this area will connect to the appropriate low power eNB.
In order to expand the uptake area of a low power eNB without increasing its transmission power, an offset is added to the received downlink signal strength in the cell-selection mechanism. In this manner the low power eNB can cover a larger uptake area, or in other words the pico cells are provided with cell range expansion (CRE). CRE is a means to increase the throughput performance in such deployments. A UE connects to a macro eNB only if the received power is at least G dB larger than the received power from the strongest pico eNB, where G is the semi-statically configured CRE bias. Typical values are expected to range from 0 to 20 dB.
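The biased cell-selection rule just stated can be sketched directly. The function and parameter names are illustrative.

```python
# Sketch of the CRE cell-selection rule: the UE connects to the macro eNB
# only if its received power exceeds the strongest pico eNB by at least
# the bias G (in dB); otherwise it connects to the pico eNB.

def select_cell(macro_rx_dbm, pico_rx_dbm_list, cre_bias_db):
    """Return 'macro' or 'pico' per the biased cell-selection rule."""
    strongest_pico = max(pico_rx_dbm_list)
    if macro_rx_dbm >= strongest_pico + cre_bias_db:
        return "macro"
    return "pico"

# Without bias the macro wins; a 10 dB CRE bias pushes the UE to the pico.
print(select_cell(-80, [-85, -88], 0))    # macro
print(select_cell(-80, [-85, -88], 10))   # pico
```

Increasing G thus enlarges the pico uptake area at the cost of admitting UEs with a weaker pico downlink, which is exactly where the interference problem discussed below arises.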
FIG. 5A illustrates such a HetNet scenario where various pico cells are provided in the area of one macro cell. The range expansion zone (CRE) is delimited in FIG. 5A by a dashed edge. The pico cell edge without CRE is delimited by a solid line edge. Various UEs are shown located in the various cells. FIG. 5B schematically illustrates the concept of a HetNet scenario including a macro eNB and a plurality of pico eNBs serving respectively a plurality of UEs located in their coverage areas.
A heterogeneous deployment with a range expansion in the range of 3 to 4 dB has already been considered in LTE release 8. Nevertheless, the applicability of CRE with cell selection offsets of up to 9 dB is currently being considered in RAN1. However, the additional capacity provided by the smaller cells may be lost due to signal interference experienced by the UEs in the pico cells. The macro eNB is the single dominant interferer for pico UEs, i.e. for UEs connected to the pico eNB. This is especially true for pico UEs at the cell edge when using CRE.
Cell-edge users served by a pico eNodeB usually have relatively low received signal strength, especially if they are located at the border of a pico cell with CRE and suffer from strong intercell interference. The major interferer is the eNodeB serving the macro cell in the Heterogeneous Network, which usually transmits subframes at a high transmission power.
In order to improve the throughput performance of cell-edge mobile terminals, the interference impact has to be reduced on the resource on which these mobile terminals are scheduled for downlink transmission. The objective of Inter-Cell Interference Coordination (ICIC) is to maximize the multi-cell throughput subject to power constraints, inter-cell signaling limitations, fairness objectives and minimum bit rate requirements.
FIG. 7 shows an exemplary downlink transmission scenario in which two UEs are served by an eNB. Depending on the SINR level on transmission resources, high or low order modulation schemes can be used for data transmissions. The set of currently supported modulation schemes in LTE consists of QPSK, 16-QAM and 64-QAM.
The modulation and coding scheme (MCS) that is used for physical downlink shared channel (PDSCH) transmissions is indicated by the MCS field within the downlink control information (DCI). The current Rel-11 MCS field has a fixed length of five bits. This results in 32 code points that are used for indicating 32 combinations of modulation scheme and code rate of the channel coder. The code rate is determined by the transport block size that is mapped onto a set of allocated resource blocks (RBs).
The interpretation of the MCS field code points is given by the specified MCS table. The table maps each code point, referred to as MCS index, to a combination of modulation order and transport block size (TBS) index. The modulation order describes the number of bits that are mapped onto a single modulation symbol. The current Release-11 table supports modulation orders 2, 4 and 6, which correspond to QPSK, 16-QAM and 64-QAM. The TBS index is linked to an entry of the TBS table which contains a transport block size depending on the number of allocated RBs. Each TBS index therefore corresponds to a certain spectral efficiency in terms of bits transmitted per RB.
The current Release-11 MCS table is shown in FIG. 6. It can be seen that the table contains three entries without TBS index. These MCS indices are used for retransmissions of erroneous transport blocks. The indication of the transport block size is not required in this case since the size is known from the initial transmission. Each MCS index corresponds to a certain SINR level at which the combination of modulation scheme and code rate that is determined by the transport block size can be used without exceeding a certain block error probability. Assuming a block error probability of 0.1, the current Release-11 table approximately covers the SINR range between −7 dB and 20 dB; the MCS table supports 27 TBS indices, and increasing the TBS index by one corresponds approximately to an SINR level difference of 1 dB.
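The quoted relation between SINR and TBS index can be sketched with a simple linear rule: the table spans roughly −7 dB to 20 dB, with about 1 dB per TBS index step. This linear mapping is an approximation for illustration only, not a link-adaptation algorithm from the specification.

```python
# Rough illustration of the SINR-to-TBS-index relation quoted above:
# ~27 TBS indices over an SINR range of roughly -7 dB to 20 dB,
# i.e. about 1 dB per index step (illustrative approximation).

SINR_MIN_DB, SINR_MAX_DB = -7.0, 20.0
N_TBS_INDICES = 27

def approx_tbs_index(sinr_db):
    """Clip SINR to the supported range and map ~1 dB per index step."""
    sinr = min(max(sinr_db, SINR_MIN_DB), SINR_MAX_DB)
    idx = int(sinr - SINR_MIN_DB)
    return min(idx, N_TBS_INDICES - 1)

print(approx_tbs_index(-10))  # 0  -> below range, most robust entry
print(approx_tbs_index(0))    # 7
print(approx_tbs_index(25))   # 26 -> above range, highest entry
```

The clipping at both ends makes the limitation discussed next visible: any SINR above about 20 dB is served by the same highest MCS entry and the surplus link quality is wasted.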
FIG. 8 shows the RB SINR level distributions of two typical UEs within a heterogeneous network deployment as evaluated during performance studies for Release-11. The results have been obtained by means of system level simulations, and the curves correspond to a cell-center UE with very high average SINR level and a cell-edge UE with very low average SINR level. From FIG. 8 it can be seen that a large fraction of the SINR samples of the cell-center UE is not covered by the current Rel-11 MCS table.