Field of the Invention
The present invention relates to carrier aggregation in cellular communication network systems. In particular, the invention relates to rate capping when multiple schedulers perform carrier aggregation, also known as cell aggregation.
Related Background Art
Prior art which is related to this technical field can e.g. be found in the following references:
[1] 3GPP TS 36.306 v10.3.0; and
[2] Yuanye Wang et al.: "Carrier Load Balancing and Packet Scheduling for Multi-Carrier Systems", IEEE, May 2010.
The following meanings for the abbreviations used in this specification apply:
2-D Two-dimensional
3GPP The 3rd Generation Partnership Project
BSR Buffer Status Report
BW Bandwidth
CA Carrier Aggregation
CC Component Carrier
CQI Channel Quality Indicator
DL Downlink
DRB Data Radio Bearer
eNB eNode B
FSY Feasibility Study
GBR Guaranteed Bit Rate
HSPA High-Speed Packet Access
IEEE The Institute of Electrical and Electronics Engineers
L1 Layer 1
LTE Long Term Evolution
MAC Medium Access Control
MC Multi Carrier
MCS Modulation and Coding Scheme
NSN Nokia Siemens Networks
OLLA Outer Loop Link Adaptation
PCell Primary Cell
PDCCH Physical Downlink Control Channel
PDCP Packet Data Convergence Protocol
PDSCH Physical Downlink Shared Channel
PDU Protocol Data Unit
PF Proportional Fair
PHY Physical
PRB Physical Resource Block
PUSCH Physical Uplink Shared Channel
QAM Quadrature amplitude modulation
Rel Release
RLC Radio Link Control
SC Single Carrier
SCell Secondary Cell
SCH Shared Channel
SV Scheduling Validator
TB Transport Block
TBS Transport Block Size
TM Transmission Mode
TTI Transmission Time Interval
UE User Equipment
UL Uplink
Carrier aggregation allows increasing transmission/reception bandwidth by aggregating component carriers. Prominent benefits of carrier aggregation include increased peak data rates, possibility to aggregate fragmented spectrum and fast load balancing.
In the specification of carrier aggregation in LTE Rel-10 in 3GPP, a common scheduler is assumed for the aggregated cells, while MAC entities and PHY layers are separated per cell. However, as shown in reference [2], good performance and inter-user fairness can be achieved with separate per-cell schedulers which communicate with each other and coordinate a scheduling metric calculation. This solution with separate and coordinated schedulers has complexity and scalability advantages compared to one common scheduler for all aggregated cells. An eNodeB protocol architecture based on this solution is shown in FIG. 1.
FIG. 1 shows an eNodeB architecture for DL CA with separate coordinated DL-schedulers included in a P-cell and an S-cell of a user equipment in a cellular communications system. PDCP and RLC layers in the P-cell (or S-cell) generate PDCP PDU(s) from radio bearer(s) and RLC PDU(s) from PDCP PDU(s), respectively. MAC layers in the P-cell and the S-cell generate MAC PDU(s) from RLC PDU(s) based on scheduling decisions concerning carrier aggregation, and the generated MAC PDU(s) are forwarded to the PHY layers of the P-cell and the S-cell. The DL-schedulers of the P-cell and the S-cell communicate with each other and with the MAC layers of the P-cell and the S-cell.
In an approach with separate per-cell schedulers, the radio resources of each aggregated cell in a given TTI are allocated separately in each cell based on the same assumptions about the UE capabilities and the UE buffer level, which gives rise to the following problems.
The total allocated resources might exceed the amount of data in the UE buffer, or some other implementation- or operator-specific limit, that can be transmitted in this TTI.
Further, the total allocated resources might exceed the UE capabilities. A maximum number of DL/UL-SCH transport block bits received/transmitted within a TTI (in all TBs) and a maximum number of bits of a DL/UL-SCH transport block received/transmitted within a TTI are specified in reference [1]. For example, UE category 3 is specified as follows:
UE Category 3, downlink:
- Maximum number of DL-SCH transport block bits received within a TTI: 102048
- Maximum number of bits of a DL-SCH transport block received within a TTI: 75376
- Total number of soft channel bits: 1237248
- Maximum number of supported layers for spatial multiplexing in DL: 2
UE Category 3, uplink:
- Maximum number of UL-SCH transport block bits transmitted within a TTI: 51024
- Maximum number of bits of an UL-SCH transport block transmitted within a TTI: 51024
- Support for 64 QAM in UL: No
In principle, any UE category can support carrier aggregation. As can be seen, even though each cell fulfils the limit of the number of bits per single TB (i.e. the maximum number of bits of a DL-SCH transport block received within a TTI), the total number of bits in all TBs can exceed the limit because each cell can allocate additional TB(s). For example, if each cell allocates a TB of 75376 bits, each TB size is within the UE category 3 limit, but the total number of bits in all TBs exceeds the UE category limit.
In case the total allocated resources exceed the UE category limits or the data available in the UE buffer, there will be an error and/or throughput loss and/or unnecessary padding.
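The per-TTI capping check described above can be sketched as follows. This is a minimal illustration, not an implementation from the source: the limit values are the UE category 3 figures from reference [1], and the function and parameter names are hypothetical.

```python
# UE category 3 downlink limits from 3GPP TS 36.306 (bits per TTI)
MAX_TBS_BITS_PER_TTI = 102048   # all transport blocks combined
MAX_BITS_PER_TB = 75376         # a single DL-SCH transport block

def validate_allocation(tb_sizes, buffer_bits):
    """Check per-cell TB allocations against UE limits.

    tb_sizes    -- TB sizes (bits) allocated by each aggregated cell
    buffer_bits -- data currently available in the UE buffer (bits)
    Returns a list of violated constraints (empty if valid).
    """
    violations = []
    if any(tb > MAX_BITS_PER_TB for tb in tb_sizes):
        violations.append("single TB exceeds per-TB limit")
    total = sum(tb_sizes)
    if total > MAX_TBS_BITS_PER_TTI:
        violations.append("total TB bits exceed per-TTI limit")
    if total > buffer_bits:
        violations.append("allocation exceeds UE buffer (padding)")
    return violations

# Each of two cells allocates a TB of 75376 bits: each TB is within
# the per-TB limit, but the total (150752) exceeds the 102048-bit cap.
print(validate_allocation([75376, 75376], buffer_bits=200000))
# → ['total TB bits exceed per-TTI limit']
```

This reproduces the example from the text: independent per-cell allocations that each satisfy the per-TB limit can still jointly violate the per-TTI limit.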
This problem is overcome by the prior art solution in the following way. A scheduling metric is calculated by the separate DL-schedulers per cell, considering both MC and SC UEs, and the DL-schedulers coordinate with each other to ensure fairness across MC and SC UEs. Based on the separately calculated scheduling metrics, one common priority list is constructed for the purpose of resource allocation, in which MC UEs can be included multiple times. If, due to scheduling on multiple cells, an MC UE reaches its rate limit, e.g. due to limited data in the buffer, another UE from the list can be scheduled, and loss/error, e.g. due to an MC UE being allocated too many resources, is avoided.
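The prior-art coordination described above can be sketched as follows, under simplifying assumptions not stated in the source (one TB grant per cell per TTI, a fixed TB size); all function and variable names are hypothetical. Per-cell scheduling metrics are merged into one common priority list in which an MC UE may appear once per cell, and a UE that has reached its rate limit is skipped so the next UE in the list is scheduled instead.

```python
def build_priority_list(per_cell_metrics):
    """Merge per-cell {ue: metric} maps into one common list,
    highest scheduling metric first; MC UEs appear once per cell."""
    entries = [(metric, cell, ue)
               for cell, metrics in per_cell_metrics.items()
               for ue, metric in metrics.items()]
    return sorted(entries, reverse=True)

def allocate(per_cell_metrics, rate_limit_bits, tb_bits):
    """Greedy allocation over the common priority list: skip any UE
    whose aggregate allocation would exceed its rate limit (e.g.
    limited buffer data or the UE category cap)."""
    allocated = {}          # ue -> total bits granted this TTI
    grants = {}             # cell -> granted ue
    for _, cell, ue in build_priority_list(per_cell_metrics):
        if cell in grants:
            continue        # this cell's TB is already granted
        if allocated.get(ue, 0) + tb_bits > rate_limit_bits[ue]:
            continue        # UE reached its rate limit; try next UE
        allocated[ue] = allocated.get(ue, 0) + tb_bits
        grants[cell] = ue
    return grants

# MC UE "A" has the best metric on both cells, but a second 75376-bit
# TB would exceed its 102048-bit per-TTI cap, so the SCell grant
# falls through to UE "B" instead of being lost.
grants = allocate({"PCell": {"A": 1.0, "B": 0.5},
                   "SCell": {"A": 0.9, "B": 0.8}},
                  rate_limit_bits={"A": 102048, "B": 102048},
                  tb_bits=75376)
print(grants)
# → {'PCell': 'A', 'SCell': 'B'}
```

The key point of the sketch is the `continue` on the rate-limit check: because allocation walks one shared list, freed resources are immediately reassigned to another UE, avoiding the loss/error described in the text.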
However, this solution is not applicable to two-dimensional time-frequency scheduling, i.e. the frequency domain scheduling cannot be performed multiple times due to eNB processing time constraints. UEs that were not allocated frequency domain resources cannot be scheduled again if, for some reason, some resources are freed after the frequency domain scheduling, e.g. due to PDCCH blocking.