The present invention relates to a server load sharing system in which server-to-client communications via an IP (Internet Protocol) network such as the Internet or an intranet are performed while sharing loads among servers.
Conventional load sharing methods are exemplified by multilink PPP (Point-to-Point Protocol) and equal-cost multipath.
Multilink PPP is defined as a method (see RFC 1990) used in ISDN (Integrated Services Digital Network) and the like, whereby, if the line speed is low, a plurality of links (physical lines) is provided between adjacent systems and loads are shared among these links. For example, multilink PPP is capable of providing communications with a bandwidth (transmission speed) of 128 Kbps by using two B channels (64 Kbps each) in ISDN.
Further, equal-cost multipath is defined as a method of retaining a plurality of paths that equally minimize the cost to the destination in dynamic routing based on a routing protocol, and of sharing loads among these paths. Equal-cost multipath can be utilized in this way in OSPF (Open Shortest Path First), defined in RFC 1247.
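As a rough illustration (not part of the specification), selecting one of several equal-cost paths is commonly done by hashing a per-flow key, so that packets of the same flow always take the same path. The flow-key format and path names below are hypothetical:

```python
import hashlib

def select_next_hop(flow_key: str, equal_cost_paths: list) -> str:
    """Pick one of several equal-cost paths by hashing a flow key.

    The same flow key always maps to the same path, while distinct
    flows spread roughly evenly across all paths.
    """
    digest = hashlib.md5(flow_key.encode()).hexdigest()
    index = int(digest, 16) % len(equal_cost_paths)
    return equal_cost_paths[index]
```

Because the choice is a pure function of the flow key, no per-flow state needs to be kept in the router.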
These load sharing methods are effective in cases where any packet may traverse any link.
Yet another conventional load sharing method is a scheme of sharing loads among a plurality of servers by use of at least one server load balancer (a server load sharing device). Server-load-balancer-based load sharing has a restraint that packets belonging to the same session must be forwarded via the same link so that the session is not disconnected.
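The session restraint above can be sketched as follows (an illustrative model only, not the claimed invention): each new session is assigned a server, and all subsequent packets of that session are pinned to the same server. The class and server names are hypothetical:

```python
import itertools

class StickyBalancer:
    """Assigns each new session to a server round-robin, then pins
    all later packets of that session to the same server so the
    session is never split across links."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)
        self._sessions = {}  # session key -> chosen server

    def forward(self, session_key):
        # First packet of a session picks the next server in rotation;
        # every later packet reuses the recorded choice.
        if session_key not in self._sessions:
            self._sessions[session_key] = next(self._cycle)
        return self._sessions[session_key]
```

A session key would typically be the 5-tuple of source/destination addresses, ports, and protocol.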
FIG. 1 is a diagram showing one example of a first conventional server load sharing system using one server load balancer. Referring to FIG. 1, the server load balancing functions of this server load balancer are classified into a load sharing function (L4 (Layer 4) load balancing function) based on categories of protocols at or above the transport layer in the OSI reference model, and a load sharing function (L7 (Layer 7) load balancing function) based on information (L7 information) of the application layer in the OSI reference model.
Namely, with the L4 load balancing function, a forwarding target packet transmitted from a client terminal and received via an IP network (the Internet or an intranet) and an L3 switch is identified based on a category of L4 protocol, that is, a port number in TCP (Transmission Control Protocol) or UDP (User Datagram Protocol). The packet is thereby identified as E-mail (POP: Post Office Protocol or SMTP: Simple Mail Transfer Protocol), as a file transfer (FTP: File Transfer Protocol), or as a reference to a Web page (HTTP: Hyper Text Transfer Protocol), and is distributed to any one of the plurality of load sharing target servers.
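The port-based identification described above can be sketched in a few lines (an illustrative example only; the port-to-service map and pool names are hypothetical):

```python
# Well-known destination ports mapped to service categories
# (an illustrative subset, not an exhaustive table).
L4_PORT_MAP = {25: "SMTP", 110: "POP3", 21: "FTP", 80: "HTTP"}

def classify_l4(dst_port: int) -> str:
    """Identify the service category from the L4 destination port."""
    return L4_PORT_MAP.get(dst_port, "OTHER")

def distribute(dst_port: int, pools: dict) -> str:
    """Send the packet to a server in the pool for its service category."""
    service = classify_l4(dst_port)
    pool = pools.get(service, pools["OTHER"])
    # Pick a server within the pool, here simply by port modulo.
    return pool[dst_port % len(pool)]
```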
Moreover, with the L7 load balancing function, the forwarding target packet is distributed to any one of the plurality of load sharing target servers according to L7 information such as a response time or a URL (Uniform Resource Locator).
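URL-based L7 distribution can likewise be sketched as a prefix lookup (an illustrative example; the prefix table and server names are hypothetical):

```python
def l7_select(url_path: str, servers_by_prefix: dict, default: str) -> str:
    """Choose a target server from the request URL path prefix,
    falling back to a default server when no prefix matches."""
    for prefix, server in servers_by_prefix.items():
        if url_path.startswith(prefix):
            return server
    return default
```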
The L3 switch serving as a network relay device is defined as a switch or router for connecting networks to each other at the network layer level in the OSI reference model.
The first server load sharing system involving the use of one server load balancer is incapable of installing a plurality of server load balancers because of the restraint that the session should not be disconnected. Accordingly, if a great quantity of packets flows in the server-to-client communications, this server load balancer inevitably becomes a bottleneck to the processing speed.
FIG. 2 shows one example of a second server load sharing system involving the use of a plurality of server load balancers.
The second server load sharing system installs an L4 switch (Layer 4 switch) at a stage anterior to the server load balancers, thereby enabling the plurality of server load balancers (a, b) to be provided.
The L4 switch serving as a network relay device is defined as a switch or gateway for connecting networks to each other, performing conversions between different categories of protocols ranging from the transport layer (L4) up to the application layer (L7) while absorbing differences between the networks.
Accordingly, this L4 switch uses the L4 load balancing function to distribute the packets to the plurality of server load balancers. In this example, the L4 switch distributes a forwarding packet to the server load balancer (a) if its protocol is FTP, and to the server load balancer (b) if its protocol is HTTP.
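The dispatch rule of this FIG. 2 example can be expressed as a simple port test (an illustrative sketch; the balancer names are hypothetical):

```python
def l4_switch(dst_port: int) -> str:
    """Direct FTP traffic to balancer (a) and HTTP traffic to
    balancer (b), mirroring the FIG. 2 example; other traffic
    defaults to balancer (a) here for illustration."""
    if dst_port == 21:      # FTP control connection
        return "balancer-a"
    if dst_port == 80:      # HTTP
        return "balancer-b"
    return "balancer-a"
```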
Each server load balancer is capable of distributing the forwarding target packet, through the L3 switch, to any one of the plurality of load sharing target servers by using the L7 load balancing function.
The second server load sharing system described above is capable of installing the plurality of server load balancers and therefore basically decreases the possibility that each server load balancer becomes a bottleneck to the processing speed. However, if the packet traffic flowing in the server-to-client communications concentrates on a particular protocol, it is still inevitable that the corresponding server load balancer becomes a bottleneck to the processing speed.