FIG. 1 illustrates a telecommunications network 100. The network 100 could be, for example, the Internet or World Wide Web. As is well known, such a network typically includes interconnected routers and server computers (servers), such as routers 110, 120, 130, 140 and 150 and servers/data centers 155, 165, and 175. “Data center” as used herein refers to a plurality of servers collectively located at a single site or node of a network. Thus, elements 155, 165 and 175 may represent either single servers or a plurality of servers grouped into a data center.
As is further well known, client computers (clients) such as clients 101, 102 and 103 may transmit requests for services, such as e-mail, Web pages, database searches and the like, to servers, and receive data in return. The protocol suite known as TCP/IP (Transmission Control Protocol/Internet Protocol) is typically used to handle exchanges of data between clients and servers. Requests issued by clients are broken down into data packets by TCP (Transmission Control Protocol) and assigned a destination address by IP (Internet Protocol). The packets travel over the transmission media connecting the clients and servers, often by different routes, and are re-assembled by TCP at the servers.
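The client–server exchange described above may be illustrated by the following sketch, in which Python's standard socket interface stands in for a client and a server; the host, port, and message contents are illustrative assumptions only, and TCP performs the packetization and reassembly transparently:

```python
import socket
import threading

ready = threading.Event()

def run_server(host="127.0.0.1", port=50007):
    # Minimal TCP server: accepts one connection and returns data
    # in response to the client's request. TCP reassembles the
    # incoming packets before recv() delivers them.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        ready.set()  # signal that the server is accepting connections
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"response to: " + data)

def send_request(host="127.0.0.1", port=50007):
    # Minimal TCP client: IP addresses and routes the packets to
    # (host, port); TCP breaks the request into segments in transit.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, port))
        cli.sendall(b"GET /page")
        return cli.recv(1024)

t = threading.Thread(target=run_server)
t.start()
ready.wait()
reply = send_request()
t.join()
```

The sketch runs both endpoints in one process for convenience; in the network 100 of FIG. 1, the client and server would of course reside on separate machines linked by the routers.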
Routers are responsible for ensuring that the packets arrive at their proper destinations. Routers typically read a data packet to obtain the destination address, calculate a route through the network for the packet, and then send the packet on toward its final destination. Data packets typically pass through at least one router on the way to their destinations, and usually pass through more than one router.
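The route calculation performed by a router can be sketched as a longest-prefix-match lookup against a forwarding table; the table entries and router names below (echoing the reference numerals of FIG. 1) are purely illustrative assumptions:

```python
import ipaddress

# Hypothetical forwarding table mapping destination prefixes to
# next-hop routers; prefixes and names are illustrative only.
FORWARDING_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): "router_110",
    ipaddress.ip_network("10.1.0.0/16"): "router_120",
    ipaddress.ip_network("0.0.0.0/0"): "router_150",  # default route
}

def next_hop(dest: str) -> str:
    # Longest-prefix match: of all table entries containing the
    # destination address, the most specific (longest) prefix wins,
    # as in conventional IP forwarding.
    addr = ipaddress.ip_address(dest)
    matches = [net for net in FORWARDING_TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return FORWARDING_TABLE[best]
```

For example, a packet destined for 10.1.2.3 matches both 10.0.0.0/8 and 10.1.0.0/16, and is forwarded toward router 120 because the /16 prefix is the more specific match.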
Routers and the transmission media linking them may sometimes be referred to as the “backbone” of a network. Because traffic through this backbone may sometimes be of a very high volume, it is not always possible to send a packet to its destination by an optimal (i.e., the fastest possible) route. Moreover, because the provision of Internet services and resources can be a source of substantial revenue to various commercial entities, the allocation of backbone bandwidth is an important concern. “Bandwidth” here refers generally to the speed and volume of data transmission over the backbone.
Accordingly, the concept of “quality of service” has arisen in the telecommunications industry. This concept links the amount of money that a client of telecommunications services is willing to pay with the amount of bandwidth allocated to that client.
In known applications of the quality of service concept, the determination as to which client is to be allocated more bandwidth or less bandwidth, compared with other clients, is made at the backbone level. More particularly, a router may read a data packet to determine which client generated the packet or is to receive the packet, based on the contents of the packet. Depending upon who the client is, or more particularly, what the client pays for services, an optimal or less-than-optimal route may be assigned to the packet.
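The backbone-level decision described above can be sketched as follows; the client identifiers, service tiers, and route labels are illustrative assumptions, not part of any actual router implementation:

```python
# Hypothetical mapping of clients (cf. clients 101-103 of FIG. 1)
# to service tiers; names and tiers are illustrative only.
CLIENT_TIERS = {
    "client_101": "premium",
    "client_102": "standard",
    "client_103": "standard",
}

def route_for_packet(packet: dict) -> str:
    # A backbone router reads the packet to determine which client
    # generated it or is to receive it, then assigns an optimal or
    # less-than-optimal route depending on what that client pays.
    client = packet.get("source")
    if client not in CLIENT_TIERS:
        client = packet.get("dest")
    tier = CLIENT_TIERS.get(client, "standard")
    return "optimal" if tier == "premium" else "best_effort"
```

Under this sketch, traffic to or from a premium-tier client is assigned the optimal (fastest) route, while other traffic is forwarded on a best-effort basis.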
As illustrated in FIG. 1, servers are also part of the network, and therefore server performance clearly has an effect on bandwidth. However, the application of quality of service discrimination for traffic at the servers is not currently known to be practiced. Accordingly, the present invention offers a method and system for applying quality of service discrimination at the server level, thereby rewarding clients who are willing to pay more with a correspondingly higher quality of service.
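One way such server-level discrimination might be sketched is as a priority queue from which the server dequeues requests of higher-paying clients first; the tier names and priority values below are illustrative assumptions and do not limit the invention:

```python
import heapq

# Hypothetical priority values: a lower value is served sooner.
PRIORITY = {"premium": 0, "standard": 1}

class QoSRequestQueue:
    """Sketch of a server-side queue that serves premium clients first."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker preserving arrival order within a tier

    def enqueue(self, tier: str, request: str):
        priority = PRIORITY.get(tier, PRIORITY["standard"])
        heapq.heappush(self._heap, (priority, self._counter, request))
        self._counter += 1

    def dequeue(self) -> str:
        # Pop the highest-priority (then oldest) pending request.
        return heapq.heappop(self._heap)[2]

q = QoSRequestQueue()
q.enqueue("standard", "req_A")
q.enqueue("premium", "req_B")
q.enqueue("standard", "req_C")
```

In this sketch, the premium request req_B is served before the earlier-arriving standard request req_A, while requests within a given tier retain their arrival order.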