The present invention generally relates to communications systems and, more particularly, to content delivery and distribution networks and components thereof, e.g., a content server, etc.
In a content delivery system, a content server may distribute different types of content, e.g., files, streaming video, etc., to different clients via a distributed communications network such as the Internet. Typically, this process is initiated by each client sending a request for particular content to the content server. As each request is received, the content server schedules delivery of the requested content to the requesting client via the distributed communications network.
One approach for scheduling delivery of content to a client (or user) via a distributed communications network is the “Normalized Rate Earliest Delivery First” (NREDF) approach (also sometimes referred to in shorter form as “NRED”). In this approach, the content server computes a normalized rate for the delivery of particular content using an estimated “shortest path” between the content server and each requesting client. The “shortest path” is that path through the distributed communications network that provides the earliest delivery time of the requested content to the requesting client. As part of this process, the content server may identify other servers in the network that can source the requested content for delivery. The content server then schedules delivery for all of the received client requests in descending order of their associated computed normalized rates. As such, a requesting client with a higher computed normalized rate is scheduled for delivery before a requesting client with a lower computed normalized rate. However, new client requests may continue to arrive even after the content server has finished scheduling the previously received (or old) client requests. In this situation, as new client requests arrive, any unfinished old client requests are rescheduled together with the new client requests in order to maximize the number of requests that can be served by the content server.
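The scheduling behavior described above can be sketched as follows. This is a minimal illustration, not the claimed method: the `Request` fields, the particular normalized-rate formula (content size divided by the time remaining until the earliest achievable delivery time over the shortest path), and the function names are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class Request:
    client_id: str
    content_size: float       # size of the requested content
    earliest_delivery: float  # estimated earliest delivery time via the "shortest path"

def normalized_rate(req: Request, now: float) -> float:
    # Hypothetical normalized-rate formula: the delivery rate needed to
    # complete the transfer by the earliest achievable delivery time.
    return req.content_size / max(req.earliest_delivery - now, 1e-9)

def schedule(old_requests: list[Request], new_requests: list[Request],
             now: float) -> list[Request]:
    # As new requests arrive, unfinished old requests are rescheduled
    # together with the new ones, in descending order of normalized rate,
    # so higher-rate requests are served first.
    combined = old_requests + new_requests
    return sorted(combined, key=lambda r: normalized_rate(r, now), reverse=True)
```

Note that every new arrival triggers a full recomputation and re-sort over all pending requests, which is the source of the scalability concern discussed below.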
Unfortunately, as the client request rate increases, the content server begins to spend more and more time just computing the normalized rates and rescheduling old requests instead of actually delivering content. This behavior jeopardizes the scalability of the NREDF approach for serving large numbers of clients.