1. Field
The present disclosure relates generally to communication and more specifically to techniques for scheduling and transmitting Quality of Service (QoS) data flows with fairness for a low priority, “bursty” data flow.
2. Background
Wireless communication systems are widely deployed to provide various types of communication content such as voice, data, and so on. These systems may be multiple-access systems capable of supporting communication with multiple users by sharing the available system resources (e.g., bandwidth and transmit power). Examples of such multiple-access systems include code division multiple access (CDMA) systems, time division multiple access (TDMA) systems, frequency division multiple access (FDMA) systems, and orthogonal frequency division multiple access (OFDMA) systems.
Generally, a wireless multiple-access communication system can simultaneously support communication for multiple wireless terminals. Each terminal communicates with one or more base stations via transmissions on the forward and reverse links. The forward link (or downlink) refers to the communication link from the base stations to the terminals, and the reverse link (or uplink) refers to the communication link from the terminals to the base stations. This communication link may be established via a single-in-single-out, multiple-in-single-out, or multiple-in-multiple-out (MIMO) system.
Traditionally, a Quality of Service (QoS) scheduling algorithm allows weights or priorities to be assigned to flows, so that the scheduler can give preference (in latency or throughput) to some flows over others. Algorithms such as weighted round-robin (WRR) or deficit round-robin (DRR) are two of the most common algorithms of this type.
In best-effort packet switching and other forms of statistical multiplexing, round-robin scheduling can be used as an alternative to first-come, first-served queuing. A multiplexer, switch, or router that provides round-robin scheduling maintains a separate queue for every data flow, where a data flow may be identified by its source and destination addresses. The algorithm lets every active data flow (i.e., one that has data packets in queue) take turns transferring packets on a shared channel in a periodically repeated order. The scheduling is work-conserving: if one flow is out of packets, the next data flow takes its place, so the channel does not sit idle while packets are queued. Round-robin scheduling results in max-min fairness if the data packets are equally sized, since the data flow that has waited the longest is given scheduling priority. However, round-robin scheduling may not be desirable if the sizes of the jobs or tasks vary strongly, because a user that produces large jobs would be favored over other users. In that case, fair queuing would be preferable.
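The per-flow round-robin behavior described above can be illustrated with a minimal sketch. The function name `round_robin` and the list-of-deques representation of per-flow queues are illustrative choices, not part of any standard API:

```python
from collections import deque

def round_robin(queues):
    """Serve one packet per turn from each active (nonempty) queue,
    cycling through the per-flow queues in a fixed, repeated order."""
    order = deque(queues)            # fixed cyclic order of per-flow queues
    served = []
    while any(q for q in order):     # while at least one flow has packets queued
        q = order[0]
        order.rotate(-1)             # advance to the next flow either way
        if q:                        # work-conserving: skip empty flows
            served.append(q.popleft())
    return served

# Three flows take turns; flow B runs out after one packet and is skipped.
flows = [deque(['a1', 'a2', 'a3']), deque(['b1']), deque(['c1', 'c2'])]
print(round_robin(flows))  # ['a1', 'b1', 'c1', 'a2', 'c2', 'a3']
```

Note that each "turn" serves exactly one packet regardless of its size, which is why the max-min fairness property only holds when packets are equally sized.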
If guaranteed or differentiated quality of service is offered, and not only best effort communication, deficit round robin (DRR) scheduling, weighted round robin (WRR) scheduling or weighted fair queuing (WFQ) may be considered.
In multiple access networks where several terminals are connected to a shared physical medium, round-robin scheduling may be provided by token passing channel access schemes such as token ring, as well as by polling or resource reservation from a central control station.
In a centralized wireless packet radio network, where many stations share one frequency channel, a scheduling algorithm in a central base station may reserve time slots for the mobile stations in a round-robin fashion and thereby provide fairness. However, if link adaptation is used, it will take much longer to transmit a given amount of data to "expensive" users than to others, since their channel conditions differ. It would be more efficient to wait to transmit until the channel conditions improve, or at least to give scheduling priority to less expensive users. Round-robin scheduling does not exploit this. Higher throughput and system spectrum efficiency may be achieved by channel-dependent scheduling, for example, a proportionally fair algorithm or maximum throughput scheduling. Note that the latter is characterized by undesirable scheduling starvation.
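A common formulation of the proportionally fair algorithm mentioned above schedules, in each slot, the user maximizing the ratio of its instantaneous achievable rate to its exponentially averaged past throughput. The sketch below illustrates that selection rule; the function names, the averaging constant `alpha`, and the epsilon guard are illustrative assumptions:

```python
def pf_select(inst_rates, avg_tputs, eps=1e-9):
    """Pick the user index maximizing instantaneous rate divided by
    exponentially averaged past throughput (proportional fairness)."""
    return max(range(len(inst_rates)),
               key=lambda i: inst_rates[i] / max(avg_tputs[i], eps))

def pf_update(avg_tputs, inst_rates, chosen, alpha=0.1):
    """EWMA update: the scheduled user credits its served rate;
    unscheduled users contribute zero and decay toward zero."""
    return [(1 - alpha) * t + alpha * (inst_rates[i] if i == chosen else 0.0)
            for i, t in enumerate(avg_tputs)]

# User 1 has a worse instantaneous rate but a much lower average
# throughput, so proportional fairness schedules it anyway.
print(pf_select([10.0, 8.0], [5.0, 2.0]))  # 1
```

Because the metric divides by accumulated throughput, a user in a temporarily good channel is favored without starving users in persistently poor channels, unlike maximum throughput scheduling.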
Weighted Round Robin (WRR) is one scheduling discipline that attempts to address this aspect. Each packet flow or connection has its own packet queue in a network interface. WRR is the simplest approximation of generalized processor sharing (GPS). While GPS serves infinitesimal amounts of data from each nonempty queue, WRR serves a number of packets from each nonempty queue: number = normalized(weight / mean_packet_size). One weakness is that, to obtain the normalized set of weights required to approximate GPS, the mean packet size must be known. In IP networks, with their variable packet sizes, the mean must be estimated, which makes a good GPS approximation hard to achieve in practice. Another weakness of WRR is that it cannot guarantee fair link sharing.
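The weight-to-packet-count normalization above can be sketched as follows. This is a minimal illustration assuming the mean packet size is known in advance; the function name, and normalizing so the smallest weight ratio maps to one packet per round, are illustrative choices:

```python
from collections import deque

def wrr_schedule(queues, weights, mean_pkt_size):
    """One WRR round: serve number = normalized(weight / mean_packet_size)
    packets from each nonempty queue."""
    ratios = [w / mean_pkt_size for w in weights]
    base = min(ratios)                               # smallest ratio -> 1 packet
    counts = [max(1, round(r / base)) for r in ratios]
    served = []
    for q, n in zip(queues, counts):
        for _ in range(n):
            if q:                                    # queue may drain mid-round
                served.append(q.popleft())
    return served

# Flow B has twice flow A's weight, so it sends two packets per round.
flows = [deque(['a1', 'a2']), deque(['b1', 'b2', 'b3'])]
print(wrr_schedule(flows, [1, 2], 1))  # ['a1', 'b1', 'b2']
```

If the actual packet sizes differ from `mean_pkt_size`, the bandwidth shares drift from the intended weights, which is precisely the weakness noted above.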
Deficit round robin (DRR), also called deficit weighted round robin (DWRR), is a modified weighted round robin scheduling discipline that can handle packets of variable size without knowing their mean size. Each queue is assigned a deficit counter; as packets are served, their lengths are subtracted from the counter, and a packet whose length exceeds the remaining counter is held back until the next visit of the scheduler.
Whereas WRR serves some packets from every nonempty queue, DRR serves the packet at the head of a nonempty queue only when that queue's deficit counter is at least as large as the packet's size. On each scheduler visit, a nonempty queue has its deficit counter increased by a given value called a quantum; packets at the head of the queue that are no larger than the deficit counter are transmitted, with the deficit counter decreased by the size of each packet served.
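The quantum-and-deficit mechanism can be sketched as below. Packets are represented as `(name, size)` pairs, and the function name, fixed round count, and resetting an empty queue's deficit are illustrative assumptions:

```python
from collections import deque

def drr_schedule(queues, quantum, rounds=1):
    """Deficit round robin: each round, add `quantum` to a nonempty
    queue's deficit counter, then serve head packets while the counter
    covers the packet size, subtracting each served packet's size."""
    deficits = [0] * len(queues)
    served = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0                       # idle queues carry no credit
                continue
            deficits[i] += quantum
            while q and q[0][1] <= deficits[i]:       # head packet fits the deficit
                name, size = q.popleft()
                deficits[i] -= size
                served.append(name)
    return served

# With quantum 500, flow B's 700-byte packet must wait one round to
# accumulate enough deficit, while flow A's small packets flow through.
flows = [deque([('a1', 300), ('a2', 300)]), deque([('b1', 700)])]
print(drr_schedule(flows, 500, rounds=2))  # ['a1', 'a2', 'b1']
```

The carried-over deficit is what lets DRR serve variable-size packets fairly without any estimate of the mean packet size.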
While these scheduling approaches achieve a degree of fairness and network efficiency in certain situations, a terminal that tends to be idle for long periods of time suffers on those occasions when it has a "burst" of data packets to send. Such a low priority "bursty" data flow can encounter long latency.