In data packet based communication systems, i.e. systems in which information to be transmitted is divided into a plurality of packets and the individual packets are sent over a communication link, it is known to provide queue buffers at various points in the network. A buffer may be a sending or input buffer (i.e. a buffer for data packets that are to be sent over a link) or a receiving or output buffer (i.e. a buffer for data packets that have already been received over a link).
Packets for transporting data may be called by any of a variety of names, such as protocol data packets, frames, segments, cells, etc., depending on the specific context, the specific protocol used, and certain other conventions. In the context of the present document, such packets of data are typically referred to as data packets. The procedures for placing data packets into a queue, advancing them in the queue, and removing data packets from the queue are referred to as “queue management”.
A phenomenon well known in data packet transmission networks is that of congestion. Congestion is a state in which the number of data packets required to be transported over a given connection or link cannot readily be handled. As a consequence of congestion at a given link, the number of data packets in a queue buffer associated with that link will increase. In response to a congestion condition, it is known to implement a data packet dropping mechanism referred to as “drop-on-full”. According to this mechanism, upon receipt of a data packet at the queue buffer, a queue length related parameter, such as the actual queue length or the average queue length, is compared to a predetermined threshold. If the predetermined threshold is exceeded, a data packet is dropped. The threshold thus indicates the “full” state of the queue.
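The threshold comparison described above can be sketched as follows. This is a minimal illustration only: the class name, threshold value, and the exponentially weighted moving average (EWMA) used for the averaged-queue-length variant are assumptions for the sake of the example, not details taken from any particular implementation.

```python
class QueueMonitor:
    """Drop-on-full check: on each packet arrival, a queue-length-related
    parameter (actual length or an EWMA-averaged length) is compared
    against a predetermined "full" threshold."""

    def __init__(self, threshold, weight=0.2, use_average=False):
        self.threshold = threshold      # the "full" level (illustrative value)
        self.weight = weight            # EWMA weight for the averaged variant
        self.use_average = use_average  # compare average instead of actual length
        self.avg = 0.0

    def should_drop(self, queue_len):
        # Keep the averaged length up to date regardless of the active variant.
        self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
        measure = self.avg if self.use_average else queue_len
        return measure > self.threshold
```

Using the actual queue length reacts instantly to bursts; the averaged variant tolerates short bursts while still detecting persistent congestion.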
The data packet which is dropped may be the newly arrived packet, in which case the mechanism is called “tail-drop”. Besides tail-drop, it is also known to perform a so-called “random-drop”, where a data packet already in the queue is selected according to a random function, or a so-called “front-drop”, where the first data packet in the queue is dropped. Such drop-on-full mechanisms not only serve to reduce the load on the congested link, but also serve as an implicit congestion notification to the source and/or destination of the data packet.
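The three drop variants can be sketched in a single routine. The function signature, policy names, and the use of a deque are illustrative assumptions chosen for the sketch:

```python
import random
from collections import deque

def drop_on_full(queue, packet, threshold, policy="tail"):
    """Admit `packet` to `queue` unless the queue is "full" (at `threshold`),
    in which case one packet is dropped according to `policy`.
    Returns the dropped packet, or None if nothing was dropped."""
    if len(queue) < threshold:
        queue.append(packet)
        return None
    if policy == "tail":
        return packet                     # drop the newly arrived packet
    if policy == "front":
        dropped = queue.popleft()         # drop the first packet in the queue
    elif policy == "random":
        i = random.randrange(len(queue))  # drop a randomly selected queued packet
        dropped = queue[i]
        del queue[i]
    queue.append(packet)                  # admit the new packet in its place
    return dropped
```

Note that under front-drop and random-drop the new packet is still admitted, which is why these variants notify the sender of congestion sooner: the dropped packet is older, so its loss is detected earlier by the transport protocol.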
Such queue management will be discussed in this document, and the examples described in some detail will involve a mobile broadband environment. Currently, upgrades of the third generation (3G) wideband code division multiple access (WCDMA) technology are being carried out to provide higher data rates for both the downlink and the uplink channel. The first phase is mainly targeted at higher downlink rates of up to 14 Mbps. This is already implemented in commercial networks and is referred to as high speed downlink packet access (HSDPA). Higher uplink rates, up to 6 Mbps, will soon be provided by high speed uplink packet access (HSUPA), which is also known as Enhanced Uplink (EUL). The combination of HSDPA and HSUPA is commonly referred to as high speed packet access (HSPA).
Standardization work on further upgrades of 3G systems is currently in progress to provide even higher data rates and decreased transmission delays. These will be achieved by further enhancements to HSPA (e-HSPA), which are still based on WCDMA. The Long Term Evolution (LTE) will allow wider frequency bands to be utilized. Common to these technologies is the existence of a high speed wireless link that is shared by all mobile terminals in a cell. Transmission on this shared channel is controlled from the network by a scheduler that works according to network specific algorithms. The scheduler transmits channel access grants to the terminals in the cell to control which terminal is allowed to use the shared channel. This access grant signaling is very fast, and the access grant may be changed between users several times per second. The scheduler algorithm, the number of active terminals and the current radio resource situation in the cell are unknown to the mobile terminal. As a result, the wireless link as seen from a mobile terminal can have large rate variations and may in the worst case change from several Mbps to a few hundred kbps several times every second.
Despite the enhanced data rates that these upgrades provide, the wireless link is likely to remain the bottleneck of an end-to-end connection. With varying radio conditions and varying bandwidth on the wireless link, the uplink buffer in the mobile terminal will have a varying queue size. Some kind of management of the buffer is therefore needed to achieve good link utilization and low delays. The most straightforward approach would be to let all incoming data be buffered regardless of the link conditions. However, this approach has many drawbacks. First of all, the buffer capacity is physically limited. Furthermore, a number of problems arise from large queues, such as excessive end-to-end packet delays, unfairness between competing flows, latency induced in other traffic sharing the same buffer, and slow reactivity in web surfing. To keep the size of the queue “appropriate”, a scheme for managing queue buffers is hence needed.
Prior art solutions include a packet discard prevention counter (PDPC) algorithm for traditional WCDMA links (e.g. where a dedicated channel is assigned per TCP flow), as described in Sagfors M., Ludwig R., Meyer M., Peisa J., “Queue Management for TCP Traffic over 3G Links”, IEEE, March 2003, and Sagfors M., Ludwig R., Meyer M., Peisa J., “Buffer Management for Rate-Varying 3G Wireless Links Supporting TCP Traffic”, IEEE, April 2003.
Furthermore, WO-02098153 A1 describes a method of managing a data packet queue in a buffer. Having defined minimum and maximum threshold levels for the packet queue, the method performs, for data packets received by the buffer, a congestion avoidance procedure when the queue exceeds the maximum threshold level or lies between the two defined levels, and does not perform a congestion avoidance procedure when the queue is below the minimum threshold.
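The three-region structure of such a two-threshold scheme can be sketched as follows. The probabilistic treatment of the in-between region (probability `p`) is an assumption in the style of random early detection (RED) schemes, not a detail taken from the cited document; the function name and return labels are likewise illustrative.

```python
import random

def handle_arrival(queue_len, min_th, max_th, p=0.1):
    """Two-threshold queue management sketch: no congestion avoidance
    below min_th; congestion avoidance (here modeled as a drop) above
    max_th; probabilistic action between the two thresholds."""
    if queue_len < min_th:
        return "enqueue"   # below minimum threshold: no congestion avoidance
    if queue_len > max_th:
        return "drop"      # above maximum threshold: always act
    # Between the thresholds: act with probability p (RED-style assumption).
    return "drop" if random.random() < p else "enqueue"
```

The minimum threshold prevents congestion avoidance from triggering on harmless short queues, while the maximum threshold bounds the worst-case queueing delay.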
However, these prior art disclosures share a drawback in that the link over which buffered packets are to be transmitted runs a risk of being under-utilized. That is, to ensure that the link is fully utilized after a packet drop, the TCP pipe capacity (the minimum amount of data a TCP flow needs to have in flight to fully utilize the bandwidth of the bottleneck interface) needs to be buffered at the time of the packet drop. Since the pipe capacity depends strongly on the bandwidth of the link, such prior art solutions are not optimal in environments where the bandwidth of the link may vary on a short time scale.
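The bandwidth dependence of the pipe capacity can be made concrete with the bandwidth-delay product, which is the usual way of expressing it; the rate and round-trip-time figures below are illustrative examples in the range mentioned earlier for shared wireless channels, not values from the cited documents.

```python
def pipe_capacity_bytes(bandwidth_bps, rtt_s):
    """TCP pipe capacity as the bandwidth-delay product: the minimum
    amount of in-flight data needed to keep the bottleneck link fully
    utilized, in bytes."""
    return bandwidth_bps / 8 * rtt_s

# With a fixed round-trip time, the required buffering scales directly
# with the link rate, so a drop threshold tuned for a fast link is far
# too large when the rate collapses:
high = pipe_capacity_bytes(6_000_000, 0.1)  # 6 Mbps at 100 ms RTT -> 75000.0 bytes
low  = pipe_capacity_bytes(200_000, 0.1)    # 200 kbps at 100 ms RTT -> 2500.0 bytes
```

A thirty-fold rate variation thus implies a thirty-fold variation in the amount of data that must be queued at the moment of a drop, which is why a static threshold cannot be optimal on a rate-varying link.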