The Long-Term Evolution (LTE) wireless communication system specified by the 3rd-Generation Partnership Project (3GPP) uses orthogonal frequency-division multiplexing (OFDM) in the downlink and discrete-Fourier-transform-spread OFDM in the uplink. The basic LTE downlink physical resource can thus be seen as a time-frequency grid. This is illustrated in FIG. 1, where each resource element corresponds to one OFDM subcarrier during one OFDM symbol interval.
In the time domain, LTE downlink transmissions are organized into radio frames of 10 milliseconds, each radio frame consisting of ten equally-sized subframes of length Tsubframe=1 millisecond. The LTE frame structure is illustrated in FIG. 2.
Furthermore, the resource allocation in LTE is typically described in terms of resource blocks, where a resource block corresponds to one slot (0.5 milliseconds) in the time domain and 12 contiguous subcarriers in the frequency domain. Resource blocks are numbered in the frequency domain, starting with 0 from one end of the system bandwidth.
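The grid dimensions above can be summarized numerically. This sketch assumes the normal cyclic prefix (7 OFDM symbols per 0.5 millisecond slot) and the standard 15 kHz subcarrier spacing; the function names are illustrative:

```python
# Illustrative sketch of the LTE downlink time-frequency grid described above.
# Assumes normal cyclic prefix and 15 kHz subcarrier spacing.

SUBCARRIER_SPACING_HZ = 15_000
SUBCARRIERS_PER_RB = 12          # resource block width in frequency
SLOT_MS = 0.5                    # resource block length in time (one slot)
SYMBOLS_PER_SLOT = 7             # normal cyclic prefix
SUBFRAMES_PER_FRAME = 10         # 10 ms radio frame = 10 x 1 ms subframes

def resource_elements_per_rb() -> int:
    """One resource element = one subcarrier during one OFDM symbol interval."""
    return SUBCARRIERS_PER_RB * SYMBOLS_PER_SLOT

def rbs_for_subcarriers(n_subcarriers: int) -> int:
    """Resource blocks are numbered from 0 upward across the system bandwidth."""
    return n_subcarriers // SUBCARRIERS_PER_RB

print(resource_elements_per_rb())   # 84 resource elements per resource block
print(rbs_for_subcarriers(1200))    # 100 RBs for a 20 MHz carrier (1200 subcarriers)
```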
Earlier versions of the LTE standard, e.g. Releases 8 and 9, support bandwidths up to 20 MHz. However, in order to meet the IMT-Advanced requirements, 3GPP has initiated work on LTE Release 10. One of the goals of LTE Release 10 is to support bandwidths larger than 20 MHz. An important requirement on LTE Release 10, however, is to assure backward compatibility with earlier versions of the standard, including spectrum compatibility. As a result, an LTE Release 10 carrier wider than 20 MHz should appear as a number of distinct LTE carriers to a legacy terminal, e.g. an LTE Release 8 or Release 9 terminal. Each such carrier can be referred to as a Component Carrier.
In particular for early LTE Release 10 deployments, it can be expected that there will be fewer LTE Release 10-capable terminals than legacy terminals. It is therefore necessary to assure efficient use of a wide carrier also for legacy terminals, i.e., it must be possible to implement carriers such that legacy terminals can be scheduled in all parts of the wideband LTE Release 10 carrier. The most straightforward way to obtain this is by means of “carrier aggregation.” Carrier aggregation implies that an LTE Release 10 terminal can receive multiple component carriers, where the component carriers have, or at least can have, the same structure as a Release 8 carrier. The same structure as Release 8 implies that all Release 8 signals and channels, e.g. primary and secondary synchronization signals, reference signals, and system information, are transmitted on each carrier. Carrier aggregation is illustrated generally in FIG. 3.
During initial access, a carrier-aggregation-capable terminal, e.g. an LTE Release 10 terminal, behaves similarly to a legacy terminal. Upon successful connection to the network via a first carrier, a terminal may, depending on its own capabilities and the network, be configured with additional component carriers in the uplink and/or downlink. Configuration of these carriers is based on Radio Resource Control (RRC) signaling. Because RRC signaling carries heavy overhead and is relatively slow, it is envisioned that a terminal may often be configured with multiple component carriers even though not all of them are used at a given instant. If a terminal is configured with multiple component carriers, it must monitor all downlink component carriers for the corresponding Physical Downlink Control Channel (PDCCH) and Physical Downlink Shared Channel (PDSCH). This implies that a wider receiver bandwidth, higher sampling rates, etc., must generally be active, resulting in high power consumption for the mobile terminal.
To mitigate these problems, LTE Release 10 supports a component carrier activation procedure in addition to the configuration procedures. The terminal then monitors only configured and activated component carriers for PDCCH and PDSCH. Since activation of component carriers is based on Medium Access Control (MAC) control elements, which are faster than RRC signaling, the number of activated component carriers can track current data-rate needs. Upon arrival of large amounts of data, multiple component carriers are activated, used for data transmission, and then de-activated when no longer needed. All component carriers except one, the downlink primary component carrier (DL PCC), can be de-activated. Note that the PCC is not necessarily the same for all terminals in the cell, i.e. different terminals may be configured with different primary component carriers. Activation therefore provides the possibility to configure multiple component carriers but only activate them on an as-needed basis. Most of the time a terminal would have one or very few component carriers activated, resulting in a lower reception bandwidth and thus lower battery consumption.
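The distinction between configured and activated component carriers can be sketched as a small state object. The class and method names here are hypothetical illustrations, not 3GPP-defined procedures:

```python
# Illustrative sketch of configured vs. activated component carriers (CCs).
# RRC configuration is slow; MAC activation/de-activation is fast.
# The DL PCC is always configured and activated and cannot be de-activated.

class CarrierAggregationState:
    def __init__(self, pcc: int):
        self.pcc = pcc                 # downlink primary component carrier
        self.configured = {pcc}        # RRC-configured CCs
        self.activated = {pcc}         # MAC-activated CCs

    def configure(self, cc: int) -> None:
        self.configured.add(cc)        # slow RRC signaling

    def activate(self, cc: int) -> None:
        if cc not in self.configured:
            raise ValueError("only a configured CC can be activated")
        self.activated.add(cc)         # fast MAC control element

    def deactivate(self, cc: int) -> None:
        if cc == self.pcc:
            raise ValueError("the DL PCC cannot be de-activated")
        self.activated.discard(cc)

    def monitored(self) -> set:
        """CCs the terminal must monitor for PDCCH/PDSCH."""
        return self.activated
```

For example, a terminal configured with CCs 0 (PCC) and 1 monitors both only while CC 1 is activated; de-activating CC 1 shrinks the reception bandwidth back to the PCC alone.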
Scheduling of a component carrier is done on the PDCCH via downlink assignments. Control information on the PDCCH is formatted as a Downlink Control Information (DCI) message. In Release 8, a terminal operates with only one downlink and one uplink component carrier. As a result, the association between downlink assignments, uplink grants, and the corresponding downlink and uplink component carriers is clear.
In Release 10, however, two modes of carrier aggregation need to be distinguished. The first case is very similar to the operation of multiple Release 8 or 9 terminals. In this mode, a downlink assignment or uplink grant contained in a DCI message transmitted on a component carrier is valid either for the downlink component carrier itself or for a corresponding uplink component carrier. The association between uplink and downlink component carriers can be established via cell-specific or UE-specific linking. In a second mode of operation, a DCI message is augmented with an indicator that specifies a component carrier, the Carrier Indicator Field (CIF). A DCI containing a downlink assignment with a CIF is valid for the downlink component carrier indicated by the CIF. Likewise, a DCI containing an uplink grant with a CIF is valid for the indicated uplink component carrier. This is referred to as cross-carrier scheduling.
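The two scheduling modes can be sketched as follows. The simplified DCI structure and function names below are illustrative assumptions, not the actual DCI formats (which are defined in 3GPP TS 36.212):

```python
# Illustrative sketch of same-carrier vs. cross-carrier scheduling.
# A real DCI carries many more fields; only the carrier resolution is shown.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Dci:
    rx_carrier: int            # component carrier the DCI was received on
    cif: Optional[int] = None  # Carrier Indicator Field, if configured

def scheduled_carrier(dci: Dci, dl_to_ul_link: dict, uplink: bool) -> int:
    """Return the component carrier a downlink assignment / uplink grant applies to."""
    if dci.cif is not None:
        return dci.cif                      # cross-carrier scheduling via CIF
    if uplink:
        return dl_to_ul_link[dci.rx_carrier]  # cell- or UE-specific DL-to-UL linking
    return dci.rx_carrier                   # valid for the carrier it arrived on
```

With this sketch, a DCI without a CIF received on carrier 0 schedules carrier 0 in the downlink (or its linked uplink carrier), while a DCI with CIF = 2 schedules carrier 2 regardless of where it was received.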
It should be noted that the inventive techniques disclosed herein are not restricted to the particular terminology used here. It also should be noted that during the development of the standards for carrier aggregation in LTE, various terms have been used to describe, for example, component carriers. Those skilled in the art will appreciate, then, that the techniques of the present disclosure are therefore applicable to systems and situations where terms like multi-cell or dual-cell operation are used. In this disclosure, the term “primary serving cell” or “PCell” refers to a cell configured on a primary component carrier, PCC. A user equipment which is capable of carrier aggregation may, in addition to the PCell, also aggregate one or more secondary serving cells, “SCells”. The SCells are cells configured on secondary component carriers, SCCs. Note that “cell” in this context refers to a network object, whereas “component carrier” or “carrier” refers to the physical resource, i.e. frequency band, that the cell is configured to use.
In the subsequent discussions, a basic heterogeneous network deployment scenario with two cell layers, here referred to as the “macro layer” and “pico layer”, respectively, is assumed. No specific assumptions are made regarding the characteristics of the different layers, except that they correspond to cells of substantially different coverage-area size, fundamentally defined by the coverage area of the basic control signals/channels, such as the Primary Synchronization Signal (PSS), Secondary Synchronization Signal (SSS), Physical Broadcast Channel (PBCH), Cell-Specific Reference Signals (CRS), PDCCH, etc. In particular, what is referred to herein as a “pico layer” can be a micro layer, a conventional outdoor or indoor pico layer, a layer consisting of relays, or a Home eNodeB (HeNB) layer.
Various inter-cell interference scenarios can be anticipated for co-channel heterogeneous network deployments. FIG. 4 illustrates three scenarios that may cause severe interference. Cases (a) and (b) involve an HeNB operating in Closed Subscriber Group (CSG) mode. In the CSG mode, access to the HeNB is granted only to those subscribers that are members of a Closed Subscriber Group associated with the HeNB. The left-hand side of FIG. 4 illustrates how a HeNB in a femto cell causes interference towards a macro cell user that has no access to the femto cell (case (a)), and how a macro cell edge user with no access to a particular femto cell may cause interference towards the HeNB (case (b)). Inter-cell interference is indicated by the dotted arrows.
The right-hand side of FIG. 4, case (c), illustrates how the interference from a macro evolved Node B (eNB) towards a pico or femto cell edge user increases, by up to Δ, if path-loss-based serving-cell selection is used instead of selection based on the strongest received downlink signal. The solid and dotted lines illustrate Rx power, and the dashed lines show 1/pathloss. To understand why this increase in interference occurs, assume that the user equipment is in close proximity to the pico base station, but far away from the macro eNB. If the UE performs path-loss-based cell selection, the footprint of the pico eNB increases, i.e. the UE connects to the pico eNB where, using received-signal-power-based cell selection, it would otherwise have connected to the macro eNB, since the macro eNB's received power is stronger. This implies that interfering signals from the macro eNB are stronger than the desired signals from the pico eNB. On the uplink, however, the situation improves, since the UE connects to the eNB to which it sees the lowest path loss, and thus the received power at the eNB is maximized.
The worst inter-cell interference issues in co-channel heterogeneous network deployments in LTE arise with respect to resources that cannot benefit from inter-cell interference coordination (ICIC). For schedulable data transmissions, such as the PDSCH and Physical Uplink Shared Channel (PUSCH), inter-cell interference can be mitigated through inter-cell coordination, such as via soft or hard physical resource partitioning. Coordination information can be exchanged across layers/cells via X2 interfaces, the standard interfaces between LTE radio base stations (eNBs). However, ICIC is not possible for signals that must be transmitted on specific resources, e.g. parts of the system information.
It is desirable that legacy mobile terminals (user equipments, or UEs, in 3GPP terminology) can operate and benefit from heterogeneous network deployments, such as by accessing any available pico layers to improve uplink performance, even when the received signal power from the macro layer is significantly higher. Such cell selection can be achieved, for example, by use of an offset applied to Reference Signal Received Power (RSRP) measurements carried out by the UE (corresponding to Δ in FIG. 4). The current specification allows for an offset up to 24 dB, which should be sufficient for most heterogeneous network scenarios.
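The offset-based cell selection described above (corresponding to Δ in FIG. 4) can be roughly sketched as follows, assuming a simple per-cell offset added to the RSRP measurement; the function and variable names are illustrative:

```python
# Illustrative sketch of offset-biased cell selection. Adding an offset to the
# pico cell's RSRP measurement steers the UE toward the pico cell even when the
# macro cell's received downlink power is significantly higher.

def select_cell(rsrp_dbm: dict, offset_db: dict) -> str:
    """Pick the cell maximizing RSRP + per-cell offset (offsets default to 0 dB)."""
    return max(rsrp_dbm, key=lambda cell: rsrp_dbm[cell] + offset_db.get(cell, 0.0))

# Macro received 15 dB stronger than pico at this UE location:
measurements = {"macro": -80.0, "pico": -95.0}
print(select_cell(measurements, {}))               # "macro": plain RSRP selection
print(select_cell(measurements, {"pico": 24.0}))   # "pico": 24 dB offset applied
```

With the maximum 24 dB offset, any macro-to-pico power imbalance up to 24 dB is overcome, which is why the text notes this should suffice for most heterogeneous network scenarios.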
To mitigate severe downlink inter-cell interference from macro eNBs towards the control regions of pico subframes, operating the layers on different carriers appears to be the only option to ensure robust communications for legacy mobile terminals in heterogeneous network deployments. This implies that the whole system bandwidth will not always be available for legacy mobile terminals and may result in reduced user throughputs. One example of reduced throughput would be a split of a contiguous system bandwidth of 20 MHz into two carriers, e.g. with a 10 MHz bandwidth on each carrier.
As pointed out above, operating different layers on different, non-overlapping carrier frequencies may lead to resource-utilization inefficiency. In the heterogeneous network illustration depicted in FIG. 5, this would imply that the overall available spectrum consists of two carriers, f1 and f2, with f1 and f2 being exclusively used in the macro and pico layers, respectively. In the subsequent discussions, it is assumed that the layers are synchronized, with time-aligned eNB transmissions, and that f1 and f2 occupy non-overlapping frequency bands.
In many cases it can be assumed that the pico layer is deployed to carry the main part of the traffic and, especially, to provide the highest data rates, while the macro layer provides full-area coverage, i.e., fills any coverage holes in the pico layer. In such a case, it is desirable that the full bandwidth, corresponding to carriers f1 and f2, is available for data transmission within the pico layer. One can also envision cases where it is desirable that the full bandwidth (f1 and f2) is available for data transmission also within the macro layer.
As already mentioned, sharing of resources between the cell layers for data transmission, i.e. operation on the same set of carriers, can be accomplished by means of ICIC methods that can be more or less dynamic, depending on the coordination capabilities between the layers and their constituent radio base stations. However, interference concerns remain with respect to the transmission of signals and/or channels that cannot rely on traditional ICIC methods but need to be transmitted on specific, well-defined resources. In LTE, these include, for example, the synchronization signals (PSS/SSS), the Physical Broadcast Channel (PBCH), and the layer 1/layer 2 (L1/L2) control channels (PDCCH, PCFICH, and PHICH).
Clearly, all these signals must be transmitted on at least one downlink carrier within each cell layer, as they are needed to enable a user equipment to detect, and connect to, a cell. The downlink carrier on which these signals are always transmitted will be referred to as the primary carrier, or primary component carrier (PCC), in the following disclosure. It should be noted, however, that these signals may also be transmitted on one or more secondary component carriers, SCCs, and if this is the case, a user equipment may receive the signals either from the PCC or from an SCC.
For the purposes of discussion, assume that the primary carrier, PCC, corresponds to carrier f1 in the macro layer and carrier f2 in the pico layer.
For the downlink situation, the three cases shown in FIG. 6 are considered below, where Case 1 differs from Case 2 with respect to the use of an Open Subscriber Group (OSG) in the former. In Case 3, both carriers, f1 and f2, are available also at the macro layer.
In Case 1, it is assumed that carrier f1, which is the macro primary component carrier, or PCC, should be available for PDSCH transmission, i.e. traffic data transmission, also within the pico layer. It is also assumed that a mobile terminal only accesses the macro layer when the path loss to the macro layer is of the same order as, or smaller than, the path loss to the pico layer.
In this case, the basic downlink control signals/channels discussed above can be transmitted on f1 also in the pico layer without severe interference to mobile terminals accessing the macro layer. Thus, both f1 and f2 can be deployed as “normal”, Release 8-compatible, carriers in the pico layer. However, a legacy mobile terminal would only be able to access f1 close to the pico cell site, where the path loss to the pico cell is much smaller than the path loss to the macro cell, in order to avoid strong control-channel interference from the macro cell. Closer to the border of the pico cell, carrier-aggregation-capable UEs, e.g. Release 10 mobile terminals, would need to access the pico cell on carrier f2, to avoid strong interference to the PSS/SSS and PBCH from the macro cell. However, these mobile terminals could be scheduled PDSCH transmissions on f1, using cross-carrier scheduling signaled via the PDCCH on f2. Note that, to avoid interference from the cell-specific reference signals (CRS) of the macro layer, pico-cell PDSCH transmission on f1 must rely on UE-specific reference signals (RS) for channel estimation, at least when the UE is close to the pico-cell border. This is because CRS are transmitted on specific resources in the data region of a subframe, so that the CRS transmitted on f1 in the macro cell will collide with the CRS transmitted on f1 in the pico cell. One might consider using frequency shifts of the CRS across layers, but the macro CRS would then cause interference towards data resource elements of the pico cell.
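The CRS collision point can be illustrated with the cell-ID-dependent CRS frequency shift (v_shift = PCI mod 6, per 3GPP TS 36.211); the helper functions below are illustrative:

```python
# Illustrative sketch of CRS collisions between two cells on the same carrier.
# In LTE, the CRS frequency shift is tied to the physical cell ID (PCI):
# v_shift = PCI mod 6 (3GPP TS 36.211). Two cells whose PCIs are congruent
# mod 6 place their CRS on the same resource elements (CRS-on-CRS collision);
# otherwise the shifted CRS of one cell lands on data REs of the other.

def crs_shift(pci: int) -> int:
    """CRS frequency shift for a given physical cell ID."""
    return pci % 6

def crs_collides(pci_a: int, pci_b: int) -> bool:
    """True if the two cells' CRS fall on the same resource elements."""
    return crs_shift(pci_a) == crs_shift(pci_b)

print(crs_collides(7, 13))   # True: both shifts equal 1, CRS collide directly
print(crs_collides(7, 8))    # False: shifted CRS interferes with data REs instead
```

Either way, some interference results, which is why the text concludes that pico-cell PDSCH on f1 must rely on UE-specific reference signals near the pico-cell border.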
In Case 2, similarly to Case 1, carrier f1 should be available for PDSCH transmission also within the pico layer. However, a mobile terminal should be able to access the macro cell even when close to a pico cell. This scenario may occur when the pico layer consists of HeNBs belonging to Closed Subscriber Groups (CSGs), and a mobile terminal not belonging to the CSG approaches an HeNB. The mobile terminal will not be allowed access to the HeNB, and must therefore connect to the macro cell instead. In this case, the pico layer must not transmit the channels listed above (PSS/SSS, PBCH, CRS, PDCCH, etc.) on f1, in order to avoid interference to mobile terminals that are accessing the macro layer in the vicinity of a pico site. Rather, the corresponding resource elements should be empty, i.e. muted. Thus, legacy mobile terminals can only access the pico layer on f2, while Release 10 mobile terminals can be scheduled on both f1 and f2, in the same way as for Case 1.
In Case 3, in addition to carrier f1 being available for PDSCH transmission within the pico layer, carrier f2 should be available for PDSCH transmission within the macro layer.
In this case, the macro layer must not transmit the basic downlink signals/channels listed above (PSS/SSS, PBCH, CRS, PDCCH, etc.) on f2, in order to avoid interference to mobile terminals that are accessing the pico layer and that may be in a location where signals from the macro layer are received with much higher power, even though the path loss to the pico layer is substantially smaller. Rather, as in Case 2, the corresponding resource elements should be empty, i.e. muted. Thus, legacy mobile terminals can only access the macro layer on f1, while carrier-aggregation-capable terminals, e.g. Release 10 mobile terminals, can be scheduled in the macro layer on both f1 and f2. It should be noted that a mobile terminal operating in this scenario can only be scheduled on the macro layer on f2 in such a way that it does not cause any severe interference to the pico cells, either by ensuring that no mobile terminal is scheduled on the corresponding resource in any pico cell within the coverage area of the macro cell, or by using low power for the macro-cell transmission, where possible.
Note that in the case where all pico cells are relatively far from the macro-cell site, one could transmit also the basic control signals/channels with reduced power on f2 from the macro-cell site. However, this would make the macro-cell on f2 appear as a separate pico cell, located at the same point as the macro cell on f1.
In LTE, mobile terminals derive the physical cell ID of a cell from the synchronization signals PSS/SSS. Likewise, the number of transmit antenna ports is blindly derived from the scrambling of the PBCH CRC. As a result, if these signals are transmitted only with zero or reduced power on a secondary component carrier, i.e. in an SCell, the UE is unable to determine either the physical cell ID or the number of transmit antenna ports. The same problem may occur even if the signals are not muted, for instance if a UE is in the vicinity of a pico cell that is interfered with by a macro cell transmitting at high power on the same carrier. In this case, the UE may not be able to hear and/or decode the synchronization signals from the pico cell, due to the severe interference.
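The derivation referred to above follows 3GPP TS 36.211: the PSS conveys an index N_ID(2) in 0..2, the SSS conveys an index N_ID(1) in 0..167, and the physical cell ID is 3·N_ID(1) + N_ID(2), one of 504 values. A minimal sketch:

```python
# Physical cell ID (PCI) derivation from the two synchronization signal
# indices, per 3GPP TS 36.211. If the PSS/SSS on a carrier are muted or
# buried in macro-layer interference, neither index can be detected, and
# the PCI therefore cannot be derived.

def physical_cell_id(n_id_1: int, n_id_2: int) -> int:
    """PCI = 3 * N_ID(1) + N_ID(2); N_ID(1) from SSS, N_ID(2) from PSS."""
    if not (0 <= n_id_1 <= 167):
        raise ValueError("N_ID(1) must be in 0..167")
    if not (0 <= n_id_2 <= 2):
        raise ValueError("N_ID(2) must be in 0..2")
    return 3 * n_id_1 + n_id_2

print(physical_cell_id(167, 2))   # 503, the highest of the 504 PCIs (0..503)
```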
In LTE, the physical cell ID is used to derive uplink demodulation reference signals (DMRS), sounding reference signals (SRS), physical uplink shared channel (PUSCH) scrambling, PDSCH scrambling, physical uplink control channel (PUCCH) signaling, L1/L2 control signaling, reference signals (RS) for transmissions using Multi-Media Broadcast over a Single Frequency Network, etc. Likewise, the number of transmit antenna ports is needed by the mobile terminal in LTE, as it influences the CRS, layer mapping, precoding, L1/L2 control signaling, etc. The CRS, in particular, are needed to perform mobility measurements, if configured on a secondary component carrier.
Thus, if a UE is not able to receive the necessary control and synchronization signals from a cell, it will not be able to detect that cell or establish communication with it, e.g. to perform carrier aggregation or mobility measurements. This may lead to reduced performance. If the UE is not able to aggregate a secondary carrier because it cannot detect the SCell, the UE may not be able to use its full bandwidth capacity, leading to lower throughput. If the UE is not able to receive reference signals and perform mobility measurements on a neighboring cell, the UE may end up being served by a less-than-optimal cell, which will also reduce performance.