In current resource buffering technologies, a scheduling server in a resource buffering system determines, according to the access popularity of a resource, whether the number of accesses to the resource reaches a preset threshold, in order to relieve egress bandwidth pressure. After the access popularity of the resource reaches the preset popularity threshold, the scheduling server selects, according to a scheduling algorithm, a cache server to download the resource that needs to be buffered. After the download is complete, the cache server notifies the database of the scheduling server that the resource has been buffered on that cache server. In practice, only one cache server in the system provides the buffering service for any given resource.
If a user needs to download a resource, the system checks whether the resource is buffered in the system. If the resource is buffered, the system retrieves information about the cache server A that buffers the resource, so that the user can download the resource from cache server A.
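The conventional flow described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the class and method names (`SchedulingServer`, `CacheServer`, `record_access`, `lookup`), the threshold value, and the hash-based server selection are all assumptions standing in for the unspecified scheduling algorithm.

```python
THRESHOLD = 3  # preset access-popularity threshold (illustrative value)

class CacheServer:
    def __init__(self, name):
        self.name = name
        self.store = set()

    def download(self, resource):
        # Fetch the resource via the egress link and buffer it locally.
        self.store.add(resource)

class SchedulingServer:
    def __init__(self, cache_servers):
        self.cache_servers = cache_servers
        self.access_count = {}  # resource -> number of accesses so far
        self.buffered_on = {}   # database: resource -> the ONE server holding it

    def record_access(self, resource):
        """Count an access; once the count reaches the preset threshold,
        schedule exactly one cache server to download the resource."""
        self.access_count[resource] = self.access_count.get(resource, 0) + 1
        if (self.access_count[resource] >= THRESHOLD
                and resource not in self.buffered_on):
            # Stand-in for the scheduling algorithm: pick one server by hash.
            server = self.cache_servers[hash(resource) % len(self.cache_servers)]
            server.download(resource)
            # The cache server notifies the database that the resource is buffered.
            self.buffered_on[resource] = server

    def lookup(self, resource):
        """Return the single cache server that buffers the resource,
        or None if the user must download via the external network."""
        return self.buffered_on.get(resource)
```

Note that `buffered_on` maps each resource to exactly one server, mirroring the single-copy behavior of the conventional system.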
In the process of developing the present disclosure, the inventor finds at least the following problems in the conventional art:
In the resource buffering system in the conventional art, loads cannot be shared among multiple nodes, and no mechanism for backing up popular resources on multiple nodes is supported. As shown in FIG. 1, resource A and resource B are buffered in cache server 1; resource F and resource X are buffered in cache server 2; and resource N, resource P, and resource O are buffered in cache server n. If a user wants to download resource F, the user can download it only from cache server 2, the only server that buffers it, and a relatively idle cache server is unable to share the load of the busy cache server. Moreover, if a server that buffers a popular resource fails, many users have to download the resource through the external network, which leads to an abrupt rise in egress bandwidth pressure.
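The failure mode can be illustrated with a short sketch, assuming the single-copy mapping of FIG. 1 and a failed cache server 2; the names (`buffered_on`, `alive`, `serve`) and the string return values are illustrative only.

```python
# Single-copy mapping from FIG. 1: each resource lives on exactly one server.
buffered_on = {"A": "cache1", "B": "cache1",
               "F": "cache2", "X": "cache2",
               "N": "cacheN", "P": "cacheN", "O": "cacheN"}
alive = {"cache1": True, "cache2": False, "cacheN": True}  # cache2 has failed

def serve(resource):
    """With no backup copy, a dead server forces every request for its
    resources through the external network, raising egress pressure."""
    server = buffered_on.get(resource)
    if server is not None and alive[server]:
        return f"download from {server}"      # served inside the system
    return "download via external network"    # egress bandwidth consumed
```

Because resource F was buffered only on cache2, every request for F now consumes egress bandwidth, while the idle servers cannot help.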