When an RTP scheduler selects which packets to send, it typically processes the streams one at a time, according to either a round-robin scheme, in which each stream has equal priority, or a prioritized scheme, in which streams having higher priority are scheduled more frequently than those with lower priority. When the network becomes congested, however, these types of scheduling may present problems; in particular, the last-scheduled stream may experience a larger number of dropped packets than earlier-transmitted streams.
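The two selection disciplines described above can be sketched as follows. This is a minimal illustration, not part of the disclosed system; the function names and the weight representation are assumptions.

```python
def round_robin_order(streams, n):
    """Round-robin: every stream has equal priority, so the scheduler
    simply cycles through the streams in a fixed order."""
    return [streams[i % len(streams)] for i in range(n)]

def weighted_order(streams, weights, n):
    """Prioritized scheme: a stream with weight w is selected w times
    per cycle, so higher-priority streams are scheduled more often."""
    cycle = [s for s, w in zip(streams, weights) for _ in range(w)]
    return [cycle[i % len(cycle)] for i in range(n)]
```

With three streams, `round_robin_order([1, 2, 3], 6)` yields 1, 2, 3, 1, 2, 3; note that the last-scheduled stream occupies the final slot of every cycle, which is the position that suffers under congestion.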
To understand how this may happen, consider the transmission of three streams from a home server to two audio/video (A/V) clients using the Real-time Transport Protocol / Real Time Streaming Protocol (RTP/RTSP) in a network that is composed of an Ethernet® portion and a wireless portion. In this example, the wireless portion has limited bandwidth and is accessed from a wireless access point (AP) on the Ethernet network. In this example, congestion may occur at the bridge between the Ethernet portion and the wireless portion of the network.
In the exemplary system, it is assumed that three RTP streams having the same constant bit rate are to be sent from the wired network to respective wireless A/V devices coupled to the network via the AP. In this simplified example, it is also assumed that these packets are the only traffic on the network between the wired and wireless portions. The scheduler is coupled to all three of the RTP sources and schedules the packets for transmission over the network. The scheduling of the packets occurs at known time intervals with a granularity that is determined by clock parameters of the scheduler. In an exemplary operating system (e.g., Microsoft® Windows®), the granularity of the transmission interval is, in general, larger than one millisecond, which is much longer than the time needed to transmit a typical Ethernet packet. For example, a 100 Mbps Ethernet network typically transmits packets having a maximum size of about 1.5 KB; such a packet may be transmitted in roughly one-tenth of a millisecond. Consequently, the sequence of packets may appear as shown in FIG. 1.
FIG. 1 shows N scheduling windows (100_1 through 100_N). Packets for stream 1 (110), stream 2 (112) and stream 3 (114) are sequentially scheduled in respective slots at the start of each scheduling window. In this example, following the stream 3 packet, the scheduling window is idle until the next scheduling period (116). The server transmits the scheduled packets during the next transmission interval. This example assumes that a backlog of queued packets exists at all times. In other words, the scheduler schedules "expired" packets, that is, packets that should have been sent prior to the current moment and are now queued for transmission. While this scheduling algorithm typically works well for maintaining streaming rates for multiple streams, it may not operate well in a congested network. This is because different streams may experience different levels of dropped packets when, for example, the streams are sent at a higher rate than can be absorbed by the wireless portion of the network. In particular, the third stream may have more gaps because its packets are more likely to encounter a full buffer at the wireless bridge than packets of the first or second streams. Although the exemplary embodiments show only one expired packet for each stream being scheduled during a scheduling window, depending on the maximum data rate of each stream, it is contemplated that multiple packets from a data stream may be scheduled during each scheduling window.
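The burst-then-idle pattern of FIG. 1 can be sketched as follows. The one-millisecond window period and the 0.12 ms per-packet transmission time are illustrative assumptions drawn from the 100 Mbps example above, not values from the disclosure.

```python
def packet_send_times(n_windows, period_ms=1.0, tx_ms=0.12, streams=(1, 2, 3)):
    """Return (start_time_ms, stream) pairs: one expired packet per stream
    is sent back-to-back at the start of each scheduling window, after
    which the window is idle until the next scheduling period."""
    times = []
    for w in range(n_windows):
        for i, s in enumerate(streams):
            times.append((round(w * period_ms + i * tx_ms, 2), s))
    return times
```

For two windows this produces sends at 0.0, 0.12 and 0.24 ms, then nothing until 1.0 ms: the three packets occupy well under half of each window, and the stream 3 packet always arrives at the bridge last in its batch.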
Another way to view the problem is to consider a congested network having a router with a limited buffer queue for incoming packets. In this network, if batches of packets are sent in the order first stream, second stream, third stream, then the first stream is the least likely to experience packet loss and the third stream is the most likely to experience packet loss. This may be seen in the exemplary buffer of FIGS. 2A through 2F. These Figures show a buffer 200 at various points in time after receiving packets as transmitted using the scheduling algorithm shown in FIG. 1. This exemplary buffer 200 is a first-in, first-out (FIFO) buffer that is filled from the top and emptied from the bottom. In this example, the buffer is temporarily congested such that, during the time interval covered by the buffer diagrams of FIGS. 2A through 2F, two packets are fetched from the buffer 200 in the time that one batch of three packets is received. The operation of this buffer is simplified in order to illustrate the problem addressed by the subject invention. It does not, for example, take into consideration the transfer of packets from other sources to the wireless network via the network bridge, and it assumes that the three data streams have equal data rates. These simplifications are made to clarify the explanation of the problem. The illustrated level of congestion is extreme, as one of every three packets is dropped. In a typical congested network the drop rate may be much lower, for example one packet in 10 or one packet in 100.
FIG. 2A shows the buffer 200 with one available slot 210. In FIG. 2B, two packets have been removed from buffer slots 214 and 216 and the remaining packets have been shifted down, leaving space to store three new packets in slots 213, 212 and 210. As shown in the Figure, the stream 1 packet is received first, followed by the stream 2 and stream 3 packets. FIG. 2C illustrates the buffer 200 after the next batch of packets has been received. As before, two packets are removed from slots 214 and 216 of the buffer 200 and the remaining packets have been shifted down. This leaves only two slots, 210 and 212, to receive the next batch of three packets. As shown in FIG. 2C, the stream 1 packet is stored in slot 212 and the stream 2 packet is stored in slot 210. The stream 3 packet (not shown) cannot be received because the buffer 200 is full. Thus, the stream 3 packet is dropped. FIGS. 2D, 2E and 2F show the buffer at subsequent time intervals. During each time interval, two packets have been fetched from the buffer while a batch of three packets arrived to be stored. In each of these Figures, one packet must be dropped and in all cases, it is the packet for stream 3. Thus, this scheduling scheme applied to the exemplary simplified buffer system effectively drops stream 3 entirely.
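The behavior of FIGS. 2A through 2F can be reproduced with the short simulation below. The buffer capacity of eight slots, the starting condition of one free slot, and the two-out/three-in rate per interval are assumptions chosen to match the simplified example; the identifiers are hypothetical and do not come from the disclosure.

```python
from collections import deque

def simulate_bridge_buffer(capacity=8, intervals=6):
    """FIFO bridge buffer under congestion: each interval, two packets
    drain toward the wireless side while one packet from each of the
    three streams arrives in scheduling order (stream 1 first).
    Returns the number of dropped packets per stream."""
    buf = deque([0] * (capacity - 1))   # start nearly full: one free slot
    drops = {1: 0, 2: 0, 3: 0}
    for _ in range(intervals):
        for _ in range(2):              # two packets fetched per interval
            if buf:
                buf.popleft()
        for stream in (1, 2, 3):        # batch of three arrives in order
            if len(buf) < capacity:
                buf.append(stream)
            else:
                drops[stream] += 1      # buffer full: packet is dropped
    return drops
```

After the first interval the buffer reaches a steady state in which only two slots free up per interval, so every dropped packet belongs to stream 3, matching the Figures.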