1. Field of the Invention
This invention relates generally to the field of data processing systems. More particularly, the invention relates to a system and method for prioritizing data transports on a data processing device.
2. Description of the Related Art
A typical layered network architecture is illustrated in FIG. 1. Each layer within the architecture performs a specific function to reliably transmit data from a source node 195 (e.g., a client computer) to a destination node 196 (e.g., a network server). For example, when an application 190 has data to transmit to another application 191, the data is processed, in succession, by a transport layer 180, a network layer 170 and a data-link layer 160 before being transmitted over the actual physical connection 165 between the two nodes. At the receiving node 196, the data is then processed in reverse order, by the data-link layer 161, the network layer 171, and the transport layer 185, before being handed off to the receiving application 191.
The descriptions below assume that the reader has at least a basic understanding of the functions of each of the network layers. For those interested, a detailed description of the network layers defined by the ISO Open Systems Interconnection model can be found in DILIP C. NAIK, INTERNET STANDARDS AND PROTOCOLS (1998) (see, e.g., Chapter 1, pages 3–11).
The well-known TCP/IP protocol (“Transmission Control Protocol/Internet Protocol”) operates at the transport and network layers, respectively. The TCP transport layer is responsible for ensuring that all data associated with a particular data transmission arrives reliably and in the correct order at its destination. Specifically, in order to ensure reliable data transmission, a virtual connection 187 (also referred to herein as a “socket connection”) is established between a TCP socket 182 opened at the destination transport layer 185 and a TCP socket 181 opened at the source transport layer 180.
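The socket connection described above can be sketched with Python's standard socket module. This is a minimal illustration, not part of the described system; the loopback address, port selection, and thread structure are assumptions made for the sake of a self-contained example.

```python
# Minimal sketch of establishing a TCP "socket connection" between two
# endpoints (cf. sockets 181 and 182 above). Illustrative only.
import socket
import threading

def run_server(server_sock, results):
    conn, _addr = server_sock.accept()   # destination-side socket (cf. 182)
    results.append(conn.recv(1024))      # TCP delivers reliably, in order
    conn.close()

# Destination node: open a listening TCP socket on an ephemeral port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

results = []
t = threading.Thread(target=run_server, args=(server, results))
t.start()

# Source node: open a TCP socket (cf. 181) and establish the connection.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")                 # data handed to the transport layer
client.close()

t.join()
server.close()
```

After the exchange, `results[0]` holds the bytes received at the destination socket.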
The TCP sockets 181, 182 perform flow control to ensure that the data transmitted from the source node is provided to the receiving node at an acceptable data rate. Specifically, a “window” is established defining the amount of outstanding data a source node 195 can send before it receives an acknowledgment back from the receiving node 196 (i.e., indicating that it has successfully received all or a portion of the data).
For example, if a pair of nodes 195, 196 are initially communicating over a TCP connection with a TCP window size of 64 KB (kilobytes), the transmitting socket 181 can only send 64 KB of data and then must stop and wait for an acknowledgment from the receiving socket 182 that some or all of the data has been received. If the receiving socket 182 acknowledges that all of the data has been received, then the transmitting socket 181 is free to transmit another 64 KB. If, however, the transmitting socket 181 receives an acknowledgment from the receiver that it only received the first 32 KB (which could happen, for example, if the second 32 KB was still in transit or was lost), then the transmitting socket 181 will only send another 32 KB, since it cannot have more than 64 KB of unacknowledged data outstanding (i.e., the unacknowledged second 32 KB plus the new 32 KB).
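The window accounting in the 64 KB example above can be modeled with a few lines of bookkeeping. This is a toy model of the sender's side only, not a real TCP implementation; the `Sender` class is a construct of this sketch.

```python
# Toy model of TCP window accounting (sender side only).
WINDOW = 64 * 1024           # 64 KB window, as in the example above

class Sender:
    def __init__(self, window):
        self.window = window
        self.unacked = 0     # bytes sent but not yet acknowledged

    def sendable(self):
        # New data that may be sent before stopping to wait for an ACK.
        return self.window - self.unacked

    def send(self, nbytes):
        assert nbytes <= self.sendable(), "would exceed the window"
        self.unacked += nbytes

    def ack(self, nbytes):
        # Receiver acknowledges nbytes; the window slides forward.
        self.unacked -= nbytes

s = Sender(WINDOW)
s.send(64 * 1024)            # window full: sender must stop and wait
assert s.sendable() == 0
s.ack(32 * 1024)             # only the first 32 KB is acknowledged...
s.send(32 * 1024)            # ...so only another 32 KB may be sent
```

At the end of the sketch the sender again has a full 64 KB of unacknowledged data outstanding, matching the scenario in the text.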
Thus, the TCP window throttles the transmission speed based on how quickly the receiving node can process the data. The TCP window is typically defined by a 16-bit TCP header field. As such, the largest window that can be used for a standard TCP connection is 2^16 bytes (64 KB).
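On most operating systems the window a connection can advertise is bounded by the socket's receive buffer, which is exposed through the standard SO_RCVBUF socket option. The following sketch sets and reads that buffer; note that platform behavior varies (Linux, for example, may return a value larger than the one requested).

```python
# Inspecting/setting the receive buffer that bounds the advertised
# window, via the standard SO_RCVBUF socket option.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)  # request 64 KB
size = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)      # actual size
sock.close()
```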
A client may concurrently have several different socket connections open with a server or client, or with several different servers/clients. Each socket connection may not be utilized in the same manner, however. For example, the user may be interactively browsing web pages via one socket connection while receiving an automated software upgrade or e-mail message over another socket connection. This may result in a degradation of the interactive user experience, particularly on networks which allocate a relatively small amount of bandwidth per device (e.g., wireless networks such as Cellular Digital Packet Data and ARDIS networks). Under these circumstances, it would be useful to have the ability to prioritize the socket connections such that the interactive connections are provided with a relatively larger amount of bandwidth than the non-interactive connections.
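The prioritization idea outlined above can be illustrated as a weighted division of the device's bandwidth among its open socket connections. The connection names, weights, and the `allocate()` helper below are hypothetical constructs for illustration, not part of TCP or of any particular implementation.

```python
# Hypothetical sketch: divide a device's bandwidth among open socket
# connections in proportion to assigned priority weights, so that
# interactive connections receive the larger share.
def allocate(total_kbps, connections):
    """Return each connection's bandwidth share, proportional to its weight."""
    total_weight = sum(weight for _name, weight in connections)
    return {name: total_kbps * weight / total_weight
            for name, weight in connections}

conns = [("web-browsing", 4),   # interactive: high priority weight
         ("sw-upgrade", 1),     # background: low priority weight
         ("e-mail", 1)]         # background: low priority weight
shares = allocate(19.2, conns)  # e.g., a 19.2 kbps CDPD wireless link
```

Under this weighting the interactive browsing connection receives two-thirds of the link, while the two background transfers split the remainder.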