The invention arose from a consideration of Internet web servers, and it is convenient to discuss it in that context, but the invention has wider applicability to other networks. Current web server systems intended for connection to the Internet, for example e-commerce systems, application servers or any other web-accessible system, typically comprise a web tier, an application tier and a storage tier, see for example FIG. 1. The web tier is typically highly replicated and homogeneous, having a large number of web servers over which data and applications are distributed in a homogeneous manner: the servers all do the same thing. Each of these web servers will serve the same data for a given service provider (e.g. an xSP: Internet, application or storage service provider) in order to spread the accessibility of the data to thousands of users. The data content of each server in the web tier is therefore identical across all of the servers. This results in a massive utilisation of disc space, in which some of the data content is not heavily accessed. This leads to a large amount of often redundant storage: a lot of data content and applications may not be served out at all frequently.
Load balancing (directing a specific request for a specific resource to a specific chosen server on the web tier) can be used to attempt to provide a better, faster service to users of the World Wide Web. For example, “IP Virtual Server” software exists for Linux. Current web-based load balancing techniques for balancing the load between web tier servers are rudimentary. In one known version of load balancing, a principal server, router, or director server distributes requests for data to a series of identical data content servers sequentially, in turn, until a server capable of servicing the request is found. What the director server is looking for is a server with the processing power free to service the request for data: it asks a series of servers in “Round-Robin” fashion until it finds one that is capable. An alternative load-balancing technique for web tier servers is to have the director server (or router) send an investigatory signal to the web tier servers, assess which server has the quickest response time, and direct the request to the web tier data content server which replied fastest. This technique of measuring response time is primarily a measure of the telecommunications links to the web tier servers: the capacity of the telecoms links is the major factor in response time. Depending upon whether the data content web tier server has a dedicated interface card or not, the response time may be influenced slightly by how busy the CPU of the web tier server is, but telecoms factors usually far outweigh this.
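The two director behaviours described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the server addresses, function names and the use of TCP connect time as the "investigatory signal" are all assumptions made for the example. Note that the connect-time probe, like the technique in the text, mostly measures the telecoms link rather than server load.

```python
import socket
import time
from itertools import cycle

# Hypothetical web-tier server addresses (not from the original text).
SERVERS = [("10.0.0.1", 80), ("10.0.0.2", 80), ("10.0.0.3", 80)]

def round_robin_director(servers):
    """Yield servers in strict rotation, as in the 'Round-Robin' scheme:
    each incoming request is offered to the next server in turn."""
    ring = cycle(servers)
    while True:
        yield next(ring)

def fastest_responder(servers, timeout=1.0):
    """Pick the server with the quickest TCP connect time.  As the text
    notes, this chiefly reflects link capacity, not CPU load."""
    best, best_rtt = None, float("inf")
    for host, port in servers:
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                rtt = time.monotonic() - start
        except OSError:
            continue  # unreachable server: skip it
        if rtt < best_rtt:
            best, best_rtt = (host, port), rtt
    return best
```

In use, a director would draw the next server from `round_robin_director(SERVERS)` for each request, or call `fastest_responder(SERVERS)` per request under the response-time scheme.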
Application servers (i.e. servers in a network serving particular applications to the network, often different applications on different networked servers) have a problem of scalability if demand for a particular application rises. Clustering is one answer to the problem of providing greater access to data and functionality, but it is expensive to replicate data content and functionality, and it is difficult to expand the capacity of a cluster of servers horizontally by the addition of more resources in real time whilst the system is operational.
Clusters are not easily scaled horizontally by the addition of more network attached storage (NAS) at the web-tier level. NAS typically does not scale well horizontally as it is attached to the network via a network interface card (NIC), and there is a limit on the number of network connections allowed by the NIC: a NIC has the capacity to handle only a limited number of connections, and cards are typically rated at 10 Mbit s⁻¹, 100 Mbit s⁻¹ or 1000 Mbit s⁻¹. Clusters typically require the purchase of expensive, cluster-certified disc arrays and fibre channel to support shared data between clustered servers.
Clustered systems typically fall into either a ‘shared everything’ class, where fibre channel, storage, switches etc. are shared by the clustered machines, or a ‘shared nothing’ class, where each machine in the cluster has its own storage, fibre channel, switches etc. In either case it is difficult to configure the cluster. The ‘shared nothing’ arrangement is very expensive, with high-end disc arrays costing around $300k per terabyte (TB); furthermore, as each disc cluster will contain similar data content at each server, or node, the expenditure on storage and other peripherals rapidly escalates.
A further problem with current web tier servers is that it is difficult and expensive to add extra data. For example, in the field of Internet video serving (serving out video movies over the Internet) a video website may have, say, ten web tier servers each holding a copy of the one hundred most popular video films. A director server, or router, receives a request for a specific video and directs it to a chosen one of the ten servers, either on a “Round-Robin” basis or by assessing telecoms response time. The chosen server serves out the selected video. However, suppose that a new video is to be added to those available. The new video must be loaded into the memories of each of the ten web tier servers and added to the list of deliverable videos in the directory of the director server. It will probably be necessary to delete an existing video to make room in the memories of the web tier servers.
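The update procedure just described can be sketched schematically. The class, capacity figure and method names below are hypothetical, chosen only to make the point that adding one title costs work proportional to the number of replicas, because every server must receive a copy and the director's catalogue must be updated:

```python
# Hypothetical sketch of the replicated-content update described above.
CAPACITY = 100  # assumed per-server limit: the 100 most popular titles

class Director:
    def __init__(self, num_servers):
        # every web-tier server holds an identical copy of the content
        self.servers = [set() for _ in range(num_servers)]
        self.catalogue = set()  # the director's list of deliverable titles

    def add_video(self, title, evict=None):
        """Add a title: it must be copied to *every* server, and if the
        servers are full an existing title must first be deleted."""
        if len(self.catalogue) >= CAPACITY:
            if evict is None:
                raise RuntimeError("servers full: a title must be evicted")
            self.catalogue.discard(evict)
            for s in self.servers:
                s.discard(evict)
        self.catalogue.add(title)
        for s in self.servers:     # O(number of replicas) per new title
            s.add(title)
```

The loop over `self.servers` is the expense the text complains of: the cost of publishing one new title grows with the size of the web tier.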
A lot of the memory of each web tier server is not actively used in any given period of time: a lot of it is redundant most of the time, but is needed in case there is a request for a less frequently requested video.
There are also difficulties in scaling horizontally. Adding another web tier server means updating the director server and copying the data content of the other web tier servers to the new web tier server, so that they are all the same.
If it is desired to increase the number of video titles available at that website it is necessary to increase the memory capacity (e.g. disc capacity) of each of the web tier servers so that they can accommodate more videos.
Currently, collections of servers that deliver content, e.g. streaming video, to a user are unaware of their storage capacity, connection capacity and bandwidth usage. A central management tool, typically a management protocol such as the Simple Network Management Protocol (SNMP) loaded on an overseer machine, can in known systems assume a wide-scale, low-level monitoring responsibility that will typically include tripping an alert on a monitoring station if a server, or other network element, fails or if network traffic exceeds a threshold value. An attendant problem with this arrangement is that by the time an alert is registered it may be too late for a network administrator (a person) to introduce a replacement server, or other additional network element, prior to a catastrophic system failure. Also, there may not be a network administrator present at all times to react to a warning message.
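A minimal sketch of the threshold-tripped alerting the text describes. The threshold value and function names are assumptions, and the `read_utilisation` callback merely stands in for an SNMP GET of a traffic counter; the point illustrated is that the alert fires only after the limit is already breached, which is why it can come too late for an administrator to act:

```python
# Hypothetical threshold monitor in the style of the SNMP-based overseer
# described above: it only reacts *after* a limit has been exceeded.

TRAFFIC_LIMIT = 90.0  # percent utilisation; an assumed threshold value

def poll(elements, read_utilisation):
    """Sample each named network element and return those already over
    the limit.  `read_utilisation` stands in for an SNMP traffic query."""
    alerts = []
    for name in elements:
        if read_utilisation(name) > TRAFFIC_LIMIT:
            alerts.append(name)  # the alert trips only once the limit is past
    return alerts
```

Run periodically, such a poller raises its warning only when utilisation has already crossed the threshold, leaving no margin for provisioning new capacity in time.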
For example, wide variations in demand for web-sites, such as increased demand for information or live video feeds during major sporting events, have resulted in web-sites crashing because the systems administrator cannot establish the rate of change of requests quickly enough to add resources in time to cope with the fluctuations in demand. A known solution to this problem is to massively over-provide for the availability of data to users: to have much more data-serving capacity than is normally needed. This is expensive and inefficient, as at times of low data demand it results in large amounts of storage devices lying idle. High-end disc arrays typically cost $300k per terabyte (TB).
It is possible to provide clusters of servers in order to accommodate fluctuations in demand for data. However, as mentioned, clusters typically require the purchase of expensive, cluster certified disc arrays and network infrastructure, typically fibre channel, to support shared data between network nodes. Additionally, clusters tend to be built in advance of demand and are not readily horizontally scaleable, for example by the addition of network attached storage (NAS) or by the addition of direct attached storage (DAS) to servers.
Video serving over the Internet is currently not very popular because it is so expensive to do, for the reasons discussed.