In the past few decades, the scale of the Internet has expanded rapidly, and networks have become a necessary part of people's daily lives.
In a conventional network structure, after a user initiates a request for an access service, the service request packet is forwarded by a number of routers distributed in the network before it reaches the server corresponding to the request. Each time the service request packet is routed and forwarded, a delay may be incurred. As the scale of the network grows larger and larger, the accumulated delay becomes more and more severe.
In an existing technology, in order to reduce the delay experienced by a user accessing a server, many content service providers deploy a dedicated cache server in an area accessed by users at a high density. The cache server caches information needed by the users, such as webpages and files. By providing the dedicated cache server, the number of times a packet is routed and forwarded can be reduced when a user in the high-density access area accesses the server. In this way, the number of switching operations in the network is reduced, and the user's access delay is shortened.
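The caching behavior described above can be illustrated with a minimal sketch: a cache node answers repeated requests for the same content locally, so only the first request travels the full multi-hop path to the origin server. The names here (`CacheNode`, `fetch_from_origin`) are illustrative assumptions, not part of the original description.

```python
class CacheNode:
    """Minimal model of a dedicated cache server in a high-density access area."""

    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin  # costly multi-hop fetch to the origin
        self._cache = {}                 # url -> cached content
        self.origin_fetches = 0          # counts how often the origin is reached

    def get(self, url):
        if url not in self._cache:       # cache miss: forward to the origin once
            self.origin_fetches += 1
            self._cache[url] = self._fetch(url)
        return self._cache[url]          # cache hit: answered locally, no extra hops


# Hypothetical origin server standing in for the remote content server.
def origin(url):
    return f"content of {url}"


node = CacheNode(origin)
for _ in range(100):                     # 100 user requests for the same page
    node.get("http://example.com/index.html")

print(node.origin_fetches)               # only the first request reached the origin
```

Under this sketch, 99 of the 100 requests are served from the cache, which is the source of the reduced forwarding count and shorter access delay described above.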
However, because a large number of users access the high-density access area, the network load may increase, causing network congestion. In addition, the server in the high-density access area is merely a content node provided by the content service provider; the content node has no knowledge of the topological structure or the load condition of the network. Therefore, when a user accesses the server in the high-density access area, routes are selected with low flexibility.