Cellular communication networks evolve towards higher data rates, together with improved capacity and coverage. In the 3rd Generation Partnership Project (3GPP) standardization body, technologies like Global System for Mobile Communication (GSM), General Packet Radio Service (GPRS), Wideband Code Division Multiple Access (WCDMA), High Speed Packet Access (HSPA) and Long Term Evolution (LTE) have been and are currently being developed.
LTE is the latest technology standardized. LTE uses an access technology based on Orthogonal Frequency Division Multiplexing (OFDM) for downlink communication (DL), i.e. communication from a base station, called eNodeB in LTE, to a user equipment (UE), and Single Carrier FDMA (SC-FDMA) for uplink communication (UL), i.e. communication from a UE to an eNodeB. The resource allocation to UEs on both DL and UL is performed adaptively by a scheduling mechanism called fast scheduling, which takes into account current traffic patterns and radio propagation characteristics for each UE. The assignment of resources in both DL and UL is performed by a scheduler situated in the eNodeB.
In LTE, data packets for all services are delivered using the IP protocol. This means that delay-sensitive data such as voice conversation, which has traditionally been a circuit switched service, is also sent over IP. The voice conversation service is then referred to as Voice over IP (VoIP).
Since VoIP is a real-time service, there is little time for queuing of data packets or for retransmissions. Especially for users experiencing bad channel conditions, such as users at a cell edge, a data frame arriving at a receiver may contain so many errors that it needs to be retransmitted before the receiver can decode the data. When many data packets have to be retransmitted, the data packet delay increases, resulting in bad voice quality. In addition, frequent retransmissions for one user equipment consume system resources and thereby reduce the total system performance. Consequently, retransmissions to or from a user equipment experiencing bad channel quality lead to increased packet delay for that user equipment and also require considerable system resources, which in turn reduces voice quality for other user equipments in the cell.
One way to address this problem is to split a VoIP packet into a number of segments, which are transmitted over the air interface individually. Since each segment is smaller than the VoIP packet, each segment can be transmitted with a larger success probability than the whole VoIP packet. But since every segment needs its own control information in a header, such as a Radio Link Control header and a Medium Access Control header, the transmission of many small segments results in increased overhead, and thereby decreased system capacity. The load on control channels also increases, since smaller scheduling units mean that more scheduling has to be performed and every segment requires a new control message, e.g. a Physical Downlink Control Channel message.
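The overhead cost of segmentation can be illustrated with a small sketch. The payload and header sizes below are hypothetical assumptions, not values taken from the LTE specifications; the point is only that the header fraction grows with the number of segments.

```python
# Illustrative sketch (hypothetical sizes): cost of splitting one VoIP
# packet into smaller segments, each needing its own RLC/MAC header.

VOIP_PAYLOAD_BYTES = 40   # assumed compressed VoIP frame size
HEADER_BYTES = 3          # assumed combined RLC + MAC header per segment

def overhead_ratio(num_segments: int) -> float:
    """Fraction of transmitted bytes that is header overhead."""
    total = VOIP_PAYLOAD_BYTES + num_segments * HEADER_BYTES
    return num_segments * HEADER_BYTES / total

for n in (1, 2, 4, 8):
    print(f"{n} segment(s): overhead = {overhead_ratio(n):.1%}")
```

With these assumed numbers, a single unsegmented transmission spends about 7% of the bytes on headers, while eight segments push the overhead to over a third of the transmitted bytes.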
To alleviate this problem, a mechanism called Transmission Time Interval (TTI) bundling has been standardized in 3GPP for UMTS, LTE etc. A TTI is generally a duration of time for a transmission over an air interface. In particular, the TTI relates to the encapsulation of higher layer data into frames, and further into packets, for transmission on the radio link layer.
When TTI bundling is used for a UE, the same VoIP packet is transmitted in four consecutive TTIs. The receiver can then combine the four received TTIs using a Hybrid Automatic Repeat Request (HARQ) mechanism and effectively get four times the received energy for the same VoIP packet. With this increase in received energy, the VoIP packet can be received with better quality and without extensive retransmission or segmentation, thus leading to decreased packet delay.
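The effect of combining four TTIs can be sketched with a toy model. The exponential relation between accumulated energy and frame error rate below is an illustrative assumption (an idealized AWGN-like model), not the actual LTE link-level behavior, but it captures why four times the energy sharply improves the decoding probability.

```python
# Sketch of the energy gain from TTI bundling with HARQ soft combining.
# The exponential FER model and the per-TTI energy value are assumptions
# for illustration only.

import math

def frame_error_rate(energy: float) -> float:
    """Toy model: error probability falls exponentially with received energy."""
    return math.exp(-energy)

single_tti_energy = 0.5   # assumed per-TTI received energy (arbitrary units)
bundle_size = 4           # TTI bundling sends the same data in 4 consecutive TTIs

fer_single = frame_error_rate(single_tti_energy)
fer_bundled = frame_error_rate(bundle_size * single_tti_energy)

print(f"FER with a single TTI: {fer_single:.3f}")   # ~0.607
print(f"FER with 4 bundled TTIs: {fer_bundled:.3f}")  # ~0.135
```

In this sketch the combined energy of the bundle reduces the error probability from roughly 61% to roughly 14%, without any explicit HARQ retransmission round trips.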
But since the same VoIP frame is transmitted four times in a row, considerable transmission resources are used for the transmission, i.e. resources that might otherwise have been used for other UEs in the cell. Also, when TTI bundling is used, only a limited number of physical resource blocks and only the most robust Modulation and Coding Schemes can be used. Hence the transport block size, and thereby also the throughput, that can be achieved for a user using TTI bundling is very limited.
Consequently, the use of TTI bundling in a cell should be limited, e.g. to user equipments in need of TTI bundling and/or to a maximum number of user equipments in the cell. This implies that it is necessary to switch user equipments from a TTI bundling enabled mode to a TTI bundling disabled mode, and vice versa.
In LTE today, a switch between TTI bundling enabled mode and TTI bundling disabled mode is initiated through an RRC Connection Reconfiguration message sent from the eNodeB to the UE. The duration of the whole procedure, from initiation to completion of the switch, varies but can be as high as 50-100 ms. During this time period no data packets can be transmitted to or from the UE. This means that an extra delay of up to 50-100 ms is added to any data frame delay already present due to, e.g., bad reception quality. The total delay may thus become so large that the quality of the received speech is reduced, and speech frames received too late may need to be discarded.
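The consequence of adding the mode-switch delay to an already loaded delay budget can be sketched as follows. The delay budget, the per-frame delays, and the chosen switch delay are hypothetical numbers for illustration; only the 50-100 ms range for the switch comes from the text above.

```python
# Sketch: speech frames that miss their delay budget once the TTI-bundling
# mode-switch delay is added. All numeric values here are assumptions.

DELAY_BUDGET_MS = 100   # assumed air-interface delay budget for a speech frame
SWITCH_DELAY_MS = 75    # a value within the 50-100 ms switch duration range

def frames_discarded(frame_delays_ms, extra_delay_ms):
    """Count speech frames that arrive after the delay budget expires."""
    return sum(1 for d in frame_delays_ms if d + extra_delay_ms > DELAY_BUDGET_MS)

delays = [20, 35, 40, 60, 80]   # hypothetical per-frame delays (ms)

print(frames_discarded(delays, 0))                # without the switch -> 0
print(frames_discarded(delays, SWITCH_DELAY_MS))  # during the switch -> 4
```

Under these assumptions, frames that comfortably met the budget before the switch are pushed past it by the reconfiguration gap, which is exactly the speech-quality degradation described above.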