Wireless terminals are enabled to communicate wirelessly in a cellular communications network or wireless communication network, sometimes also referred to as a cellular radio system or a cellular network. Communication devices such as wireless terminals are also known as e.g. User Equipments (UE), mobile terminals and/or mobile stations. Wireless terminals may further be referred to as mobile telephones, cellular telephones, laptops, tablet computers or surf plates with wireless capability, just to mention some further examples. The communication may be performed e.g. between two wireless terminals, between a wireless terminal and a regular telephone and/or between a wireless terminal and a server via a Radio Access Network (RAN) and possibly one or more core networks, comprised within the cellular communications network.
The cellular communications network covers a geographical area which is divided into cell areas, wherein each cell area is served by an access node. A cell is the geographical area where radio coverage is provided by the access node.
The access node may further control several transmission points, e.g. having Remote Radio Units (RRUs). A cell can thus comprise one or more access nodes each controlling one or more transmission/reception points. A transmission point, also referred to as a transmission/reception point, is an entity that transmits and/or receives radio signals. The entity has a position in space, e.g. an antenna. An access node is an entity that controls one or more transmission points. The access node may e.g. be a base station such as a Radio Base Station (RBS), eNB, eNodeB, NodeB, B node, or BTS (Base Transceiver Station), depending on the technology and terminology used. The base stations may be of different classes such as e.g. macro eNodeB, home eNodeB or pico base station, based on transmission power and thereby also cell size.
Further, each access node may support one or several communication technologies. The access nodes communicate over an air interface operating on radio frequencies with the wireless terminals within range of the access node. In the context of this disclosure, the expression Downlink (DL) is used for the transmission path from the base station to the wireless terminal. The expression Uplink (UL) is used for the transmission path in the opposite direction i.e. from the wireless terminal to the base station.
In 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE), base stations, which may be referred to as eNodeBs or eNBs, may be directly connected to one or more core networks.
The 3GPP LTE radio access standard has been written in order to support high bit rates and low latency for both uplink and downlink traffic. All data transmission in LTE is controlled by the radio base station.
Wireless communication has been overtaking wired communication since the last decade of the last century, and the transmitted data volume has increased dramatically every year. From making voice calls, to sending SMS, to surfing the web, sharing data with friends and so on, wireless communication has changed significantly, and it now plays an important role in people's everyday lives.
After several evolutions from GSM to WCDMA, the most recent wireless technology, LTE, treats all transmitted data in the same way as Internet Protocol (IP) data and follows the same protocols and algorithms at higher layers, such as e.g. the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP), regardless of the traffic type. This makes the system easier to maintain and also simplifies hardware implementation. However, it makes the scheduling algorithm in the base station more complicated, in order to fulfill the Quality of Service (QoS) requirements of different traffic.
Scheduling Strategy
The frequency spectrum is used to carry all transmitted data in the wireless network. Due to the limited amount of spectrum and the increasing number of users and growing data volume, it is critical to utilize the frequency resource more efficiently than ever before. In order to fully exploit the frequency resources, a scheduler in a base station performs a resource allocation algorithm. The base station, which may be referred to as an eNB in LTE, makes a scheduling decision every Transmit Time Interval (TTI), in which it is decided how the frequency resource shall be allocated among all the user equipments. Generally, the scheduler prioritizes user equipments according to a QoS requirement of each user equipment's data traffic; for example, control signaling is always prioritized over web data traffic. Control signaling may refer to data carried by a Signalling Radio Bearer (SRB), e.g. data using the Radio Resource Control (RRC) protocol.
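Purely as an illustration, the per-TTI prioritization described above may be sketched as follows. The field names and priority values below are hypothetical and not standardized:

```python
# Hypothetical sketch of per-TTI scheduler prioritization.
# Field names and priority values are illustrative, not standardized.

def prioritize(user_equipments):
    """Order user equipments for one TTI: data on a Signalling Radio
    Bearer (control signaling) always outranks ordinary web traffic;
    within each group, a lower QoS priority value ranks first."""
    return sorted(user_equipments,
                  key=lambda u: (0 if u["has_srb_data"] else 1,
                                 u["qos_priority"]))

ues = [
    {"id": 1, "has_srb_data": False, "qos_priority": 7},  # web traffic
    {"id": 2, "has_srb_data": True,  "qos_priority": 1},  # RRC signaling
    {"id": 3, "has_srb_data": False, "qos_priority": 2},  # e.g. VoIP
]
ranking = [u["id"] for u in prioritize(ues)]  # UE 2 first, then 3, then 1
```

In this sketch, the user equipment with pending SRB data is always scheduled first, mirroring the rule that control signaling is prioritized over web data traffic.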
Transport Block Size (TBS)
To enable efficient usage of the frequency spectrum, different Modulation and Coding Schemes (MCS) are utilized to maximize the number of bits per Hertz (Hz). As is known, Quadrature Phase Shift Keying (QPSK), 16-constellation-point Quadrature Amplitude Modulation (16QAM) and 64-constellation-point QAM (64QAM) are all used in the LTE system. A higher order modulation means a larger number of bits per Hz but lower robustness. A link adaptation algorithm is used to select the MCS, according to a Hybrid Automatic Repeat Request (HARQ) operation as well as the user equipment's channel condition and power condition.
Based on the number of allocated Physical Resource Blocks (PRBs) and the selected MCS, the TBS is calculated according to 3GPP 36.213, Table 7.1.7.2.1-1. According to this table, the TBS may vary from 16 bits to 75376 bits in a 20 MHz bandwidth system, where the maximum number of bits increases linearly with the bandwidth. Simply put, the TBS may be imagined as the amount of information bits a user equipment can transmit within one scheduling opportunity. The information bits mentioned here mean the Media Access Control (MAC) Packet Data Unit (PDU) size, which relates to bits transmitted in the physical layer, including both a MAC header, e.g. MAC control elements, and a MAC payload.
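The TBS determination above amounts to a table lookup indexed by a TBS index and the number of allocated PRBs. Only a few entries of 3GPP 36.213, Table 7.1.7.2.1-1 are reproduced in the sketch below for illustration; the full table should be consulted for actual values:

```python
# A few entries of 3GPP 36.213, Table 7.1.7.2.1-1 (TBS in bits),
# keyed by (TBS index, number of allocated PRBs); illustration only,
# the full table covers all index/PRB combinations.
TBS_TABLE = {
    (0, 1): 16,       # smallest TBS in the table
    (2, 2): 72,       # 9 bytes, usable at very low SINR
    (26, 100): 75376, # largest TBS with 20 MHz bandwidth (100 PRBs)
}

def transport_block_size(tbs_index, n_prb):
    """Look up the TBS for a given TBS index and PRB allocation."""
    return TBS_TABLE[(tbs_index, n_prb)]
```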
Delay Sensitive Traffic
In the evolution of wireless communication networks, more and more applications are using LTE as a data transmission network. Throughput is no longer the only key parameter of transmission quality; other requirements also acquire importance depending on the QoS of some specific traffic. For instance, Voice over IP (VoIP) is one type of traffic that is less throughput sensitive, but packet delay sensitive. VoIP is a protocol for the delivery of voice communications and multimedia sessions over IP networks, such as the Internet. A lower packet delay gives a better quality of a VoIP service than a higher physical bit rate. Simply said, if a Real-time Transport Protocol (RTP) packet of VoIP cannot be transmitted on time, it will be useless. Real-time video and online gaming are also classified as delay sensitive traffic.
VoIP Traffic Mode
VoIP traffic comprises two different modes, one TALK mode and one Silence Indicator (SID) mode. Literally, TALK mode indicates that a user equipment is talking, while SID mode indicates that the user equipment is listening. A packet comprises two kinds of data: control information, such as a header, and user data, also referred to as payload. The packets are generated with different sizes and intensities for the two modes. The packet interval is commonly 20 ms during TALK mode and 160 ms during SID mode. Typically, an RTP payload size in TALK mode is much larger than the packet size in SID mode, depending on the codec used, e.g. the Adaptive Multi Rate (AMR) codec. For example, using the AMR codec at 12.2 kbps gives around 256 bits of RTP payload. Without Robust Header Compression (ROHC), all RTP/UDP/IP/Packet Data Convergence Protocol (PDCP)/Radio Link Control (RLC) headers may make an RLC Service Data Unit (SDU) packet at the MAC layer during TALK mode as large as 594 bits, considering that IPv4 is used. Note that the header sizes of the different protocol levels may be slightly different, depending on the configurations.
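The packet-size arithmetic above may be illustrated as follows, assuming typical header sizes of 12 bytes (RTP), 8 bytes (UDP) and 20 bytes (IPv4); actual sizes depend on the configuration, and the PDCP and RLC headers add a few further bytes toward the 594-bit figure mentioned above:

```python
# Illustrative VoIP packet-size arithmetic (TALK mode, no ROHC, IPv4).
# Header sizes below are typical values, not configuration-exact;
# PDCP/RLC headers would add a few further bytes.
AMR_12_2_RTP_PAYLOAD_BITS = 256   # RTP payload per 20 ms TALK frame

RTP_HEADER_BYTES = 12
UDP_HEADER_BYTES = 8
IPV4_HEADER_BYTES = 20

payload_bytes = AMR_12_2_RTP_PAYLOAD_BITS // 8   # 32 bytes of speech data
ip_packet_bytes = (payload_bytes + RTP_HEADER_BYTES
                   + UDP_HEADER_BYTES + IPV4_HEADER_BYTES)
ip_packet_bits = ip_packet_bytes * 8             # before PDCP/RLC headers
```

Note that the headers alone more than double the size of the 32-byte speech payload, which is why ROHC matters for VoIP efficiency.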
Delay Based Scheduling (DBS)
As mentioned above, a scheduler performs a scheduling decision every TTI to allocate resources among user equipments. Different scheduling algorithms are employed in order to meet different QoS requirements. Round Robin (RR) scheduling and Proportional Fair (PF) scheduling are two commonly used scheduling algorithms, where the aim of RR is to achieve absolute fairness, while PF aims to maintain a balance between fairness and system throughput.
Additionally, Delay Based Scheduling (DBS) is another algorithm that is optimized for delay sensitive traffic, such as VoIP traffic. It considers the packet delay of the different user equipments when performing prioritization among them. In most cases, the user equipment with the oldest packet in its buffer is prioritized over the others.
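A minimal sketch of this DBS prioritization, assuming a hypothetical per-user-equipment record of the arrival time of the oldest buffered packet, might look as follows:

```python
# Sketch of Delay Based Scheduling (DBS) prioritization; the field
# names are hypothetical illustrations, not standardized parameters.

def dbs_order(user_equipments, now_ms):
    """Rank user equipments so that the one whose oldest buffered
    packet has waited the longest is scheduled first."""
    return sorted(user_equipments,
                  key=lambda u: now_ms - u["oldest_packet_arrival_ms"],
                  reverse=True)

ues = [
    {"id": 1, "oldest_packet_arrival_ms": 90},
    {"id": 2, "oldest_packet_arrival_ms": 40},  # oldest packet -> first
    {"id": 3, "oldest_packet_arrival_ms": 75},
]
order = [u["id"] for u in dbs_order(ues, now_ms=100)]
```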
In order to meet the QoS requirement, most of the packets of delay sensitive traffic shall arrive within a time budget. For the sake of simplicity, VoIP is taken as one example of delay sensitive traffic to illustrate this problem. VoIP packets are generated periodically as described above. According to the delay requirement of each single packet, in theory, a base station such as an eNB must always maintain a minimum bit rate for each VoIP user equipment in order to meet the requirement.
Assuming that 12.2 kilobits per second (kbps) is used as the VoIP RTP codec rate, and considering also the protocol headers, at least 25-200 kbps is needed to transmit the 12.2 kbps RTP VoIP traffic, depending on the maximum number of segmentations for one VoIP packet. In case of a bad channel condition, the scheduler does not give a TBS large enough for the whole VoIP TALK packet; instead, RLC may chop the whole packet into small segments and send them one by one in the physical layer with a small TBS. However, each additional segment requires one more MAC header, which increases the total bit rate at the MAC layer. If any of those small segments cannot be successfully transmitted on time for any reason, all the transmitted segments will be discarded, and the QoS requirement will fail. One obvious reason in this case may be that the scheduling capacity is lower than the required bit rate.
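The per-segment overhead may be sketched as follows, where the per-segment header size of 2 bytes is an assumed value for illustration only (actual MAC/RLC header sizes depend on the configuration):

```python
# Illustrative segmentation-overhead arithmetic: each RLC segment
# carries its own per-segment header (2 bytes is an assumed value),
# so chopping one packet into many segments inflates the total
# number of bits sent at the MAC layer.

def mac_layer_bits(sdu_bytes, n_segments, header_bytes_per_segment=2):
    """Total MAC-layer bits for one SDU split into n_segments pieces."""
    return (sdu_bytes + n_segments * header_bytes_per_segment) * 8

unsegmented = mac_layer_bits(73, 1)   # whole packet in one opportunity
segmented = mac_layer_bits(73, 10)    # ten small segments cost more bits
```

The same 73-byte packet thus costs noticeably more MAC-layer bits when segmented, on top of the risk that losing any one segment invalidates all of them.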
Prior art link adaptation has been designed to adapt the modulation scheme according to the Signal to Interference plus Noise Ratio (SINR), in order to achieve a stable transmission error rate, e.g. 10% retransmission. However, this strict algorithm may be too robust and may not work well in case of congestion, which results in huge buffer queues and in user equipments starving each other. Thus, a problem is that when a user equipment is in a bad channel condition, such as e.g. temporarily being in a channel fading dip or at the cell border with strong interference, the base station cannot provide such a VoIP user equipment with the required bit rate. The consequence is that the QoS requirement of such a VoIP user equipment will not be met regardless of which scheduling algorithm has been used, and all the scheduled resources allocated to this user equipment are useless and wasted. Moreover, according to the basic principle of DBS, those user equipments will very likely get a higher priority than other user equipments, which may leave no scheduling resources for the other user equipments in the same cell.
In order to illustrate the problem, a simple example is used to clarify it. Assume that one VoIP packet is 596 bits, i.e. 73 bytes, with no ROHC, IPv4 and the AMR 12.2 codec, and a user equipment that is at the cell edge where the Signal to Interference plus Noise Ratio (SINR) is very low and the power is limited, such that only a very small TBS can be used in order to meet the required 10% HARQ Block Error Rate (BLER). Since a MAC header may be as large as 7 bytes, a typical TBS used at a bad SINR is 9 bytes, to be able to carry a minimum of 2 bytes of payload. One byte is a grouping of 8 bits. According to 3GPP 36.213, Table 7.1.7.2.1-1, 9 bytes may be transmitted via the physical layer during one transmission with MCS 2 and 2 SBs. In a worst case scenario, e.g. if also Buffer Status Report (BSR) and Power Headroom Report (PHR) information is transferred within the MAC PDU, the MAC headers may be 7 bytes, which implies that only 2 bytes may be used for transmitting the payload. Since the VoIP traffic is generated every 20 ms, but within the 20 milliseconds (ms) only 2*20 = 40 payload bytes can be transmitted, which is fewer than the 73 bytes arriving, the base station will never be able to empty the buffer of the user equipment and satisfy the latency requirements. Instead, the buffer of the user equipment will keep piling up. At the same time, this user equipment is wasting one scheduling opportunity every TTI. In case there are other user equipments in the system, they might be prevented from getting scheduled and hence may suffer from starvation. FIG. 1 illustrates one example of how a buffer of a user equipment piles up in an extreme case where all the scheduling resources are wasted by this user equipment. A PDCP VoIP packet of 73 bytes is considered.
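The pile-up in this example may be sketched numerically as follows, as a simplified simulation in which one TTI is taken as 1 ms and retransmissions are ignored:

```python
# Simplified simulation of the buffer pile-up: a 73-byte PDCP VoIP
# packet arrives every 20 ms, while the 9-byte TBS minus 7 bytes of
# MAC headers drains only 2 payload bytes per 1 ms TTI.
PACKET_BYTES = 73
PACKET_INTERVAL_MS = 20
PAYLOAD_BYTES_PER_TTI = 2

buffer_bytes = 0
for tti in range(200):                    # simulate 200 ms
    if tti % PACKET_INTERVAL_MS == 0:
        buffer_bytes += PACKET_BYTES      # a new VoIP packet arrives
    buffer_bytes = max(0, buffer_bytes - PAYLOAD_BYTES_PER_TTI)
# after 200 ms: 10 packets (730 bytes) arrived, only 400 bytes drained
```

Since each 20 ms interval adds 73 bytes but drains at most 40, the buffer grows without bound, matching the behavior illustrated in FIG. 1.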
Note that FIG. 1 shows an extreme case which may be rare in a real situation, but it still illustrates the situation where the scheduled bits within 20 ms cannot catch up with the incoming data of this user equipment. When this happens, all scheduling resources are wasted, starving the other user equipments in the system. This problem grows linearly worse with the increasing number of active user equipments in the system.