Efficient allocation of network resources, such as available network bandwidth, has become critical as enterprises increase reliance on distributed computing environments and wide area computer networks to accomplish essential tasks. The widely used TCP/IP protocol suite, which implements the world-wide data communications network environment called the Internet and is employed in many local area networks, omits any explicit supervisory function over the rate of data transport across the various devices that comprise the network. While this characteristic has certain perceived advantages, it juxtaposes very high-speed and very low-speed packets in potential conflict and produces certain inefficiencies. Certain loading conditions degrade the performance of networked applications and can even cause instabilities that lead to overloads, temporarily halting data transfer. The above-identified U.S. Patents and patent applications explain certain technical aspects of a packet-based telecommunications network environment, such as Internet/Intranet technology based largely on the TCP/IP protocol suite, and describe the deployment of bandwidth management solutions to monitor and manage network environments using such protocols and technologies.
An important aspect of implementing enterprise-grade network environments is provisioning mechanisms that address, or adjust to, the failure of systems associated with or connected to the network environment. For example, FIG. 1A illustrates a computer network environment including an application traffic management device 130 deployed to manage network traffic traversing an access link 21 connected to a computer network 50, such as the Internet, an Internet Service Provider, or a Carrier Network. As one skilled in the art will recognize, the failure of application traffic management device 130 will prevent the flow of network traffic between end systems connected to LAN 40 and computer network 50. To prevent this from occurring, one prior art mechanism includes a relay that actuates a switch to create a direct path for electrical signals across the application traffic management device 130 when a software or hardware failure disables the device. In this manner, the application traffic management device 130 essentially acts as a wire, allowing network traffic to pass and thereby preserving network access. The problem with this approach is that, while network access is preserved, there is no failover mechanism to control or optimize network traffic while the application traffic management device 130 remains down.
To provide failover support that addresses this circumstance, the prior art included a “hot standby” mechanism offered by Packeteer, Inc. of Cupertino, Calif., for use in shared Ethernet network environments employing the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol. As FIG. 1B illustrates, redundant application traffic management devices 230a, 230b are deployed between router 22 and LAN 40. The inherent properties of the shared Ethernet LANs 40 and 41 meant that all inbound and outbound packets were received at both application traffic management devices 230a, 230b. According to the hot standby mechanism, one application traffic management device 230a (for instance) operated in a normal mode, classifying and shaping network traffic, while the other application traffic management device 230b operated in a monitor-only mode in which packets were dropped before egress from the device. The application traffic management devices 230a, 230b were also configured to communicate with each other over LAN 40 and/or 41 to allow each device to detect when the other failed. When such a failure occurred, application traffic management device 230b, previously operating in monitor-only mode, could provide failover support in a substantially seamless fashion, since its data structures were already populated with the information required to perform its function.
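The division of labor just described can be sketched in outline. The following Python sketch is purely illustrative and hedged: the class name, heartbeat timeout, and byte-count flow table are assumptions for exposition, not Packeteer's actual implementation. The standby device classifies every packet to keep its data structures warm but drops each packet, and promotes itself to the active shaping role only when the peer's heartbeat goes silent.

```python
import time

class StandbyMonitor:
    """Illustrative sketch of a hot-standby role (names and thresholds assumed).

    In monitor-only mode the device observes every packet, so its
    classification tables stay populated, but it drops packets before
    egress. It promotes itself to active shaping when the peer's
    heartbeat has been silent longer than the timeout.
    """
    def __init__(self, heartbeat_timeout=3.0):
        self.heartbeat_timeout = heartbeat_timeout
        self.last_heartbeat = time.monotonic()
        self.active = False          # monitor-only until failover
        self.flow_table = {}         # per-flow byte counts, kept warm

    def on_packet(self, flow_id, size):
        # Classify every packet, active or not, so failover is seamless.
        self.flow_table[flow_id] = self.flow_table.get(flow_id, 0) + size
        # Active device forwards (shapes) the packet; standby drops it.
        return self.active

    def on_peer_heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def check_failover(self, now=None):
        now = time.monotonic() if now is None else now
        if not self.active and now - self.last_heartbeat > self.heartbeat_timeout:
            self.active = True       # peer presumed failed: begin shaping
        return self.active
```

Because the flow table is maintained while standing by, promotion requires no warm-up period, which is the essence of the seamless takeover described above.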
While the hot standby feature is suitable in shared Ethernet environments, the use of Ethernet LAN switches in more modern enterprise networks has presented further challenges. In switched Ethernet environments, packets are directed only through the intermediate network devices in the communications path to the destination host, rendering invalid the assumption upon which the hot standby mechanism is based, namely, that both devices receive all packets. FIG. 2A illustrates a computer network environment implemented by LAN switches 23, where end systems such as computers 42 and servers 44 are connected to different ports of a LAN switch 23. In the network environment of FIG. 2A, LAN switches 23 connect application traffic management devices 30a, 30b to router 22, as well as to the end systems associated with an enterprise network. While the application traffic management devices 30a, 30b are deployed in a redundant topology, without the present invention there is no mechanism that ensures that one application traffic management device can seamlessly take over for the other should one fail.
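The forwarding behavior that defeats the hot-standby assumption can be illustrated with a minimal MAC-learning switch model. This is a hypothetical sketch, not any vendor's forwarding logic: once the switch learns which port a destination address sits on, it delivers frames out that one port only, so a redundant device attached to a different port no longer observes the traffic.

```python
class LearningSwitch:
    """Minimal illustrative model of MAC-learning Ethernet forwarding."""
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}   # MAC address -> port on which it was seen

    def forward(self, src_mac, dst_mac, in_port):
        # Learn the sender's location from the incoming frame.
        self.mac_table[src_mac] = in_port
        if dst_mac in self.mac_table:
            # Known destination: unicast out a single port only.
            return [self.mac_table[dst_mac]]
        # Unknown destination: flood to all other ports,
        # which is the only case that resembles shared Ethernet.
        return [p for p in range(self.num_ports) if p != in_port]
```

Only before any address is learned does every port (and hence a standby device) see the frame; thereafter the standby is blind to unicast traffic between learned hosts.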
Furthermore, many enterprise network architectures feature redundant topologies for such purposes as load-sharing and failover. For example, as FIG. 2B illustrates, a typical enterprise network infrastructure may include a plurality of access links (e.g., 21a, 21b) connecting an enterprise LAN or WAN to an open computer network 50. In these network topologies, network traffic may be directed entirely through one route or load-shared between alternative routes. According to these deployment scenarios, a given application traffic management device 30a or 30b may, during a given span of time, see all, part, or none of the network traffic. This circumstance renders control of network traffic on a network-wide basis problematic, especially when the application traffic management devices 30a, 30b each encounter only part of the network traffic. That is, without the invention described herein, each application traffic management device 30a, 30b does not obtain enough information about the network traffic associated with the entire network to accurately monitor network traffic and make intelligent decisions to control or shape the traffic flowing through the corresponding access links 21a, 21b. In addition, if a given application traffic management device 30a, 30b sees no traffic for a period of time and the active route fails (for example), the device deployed on the alternate route essentially becomes the master controller, but possesses no prior information about existing flows or other network statistics. This circumstance often renders it impossible to adequately classify data flows associated with connections active at the time of a change or failover in the active application traffic management device.
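One way to picture the synchronization gap described above is as the merging of the partial per-flow counters each device observes on its own link. The sketch below is illustrative only (the flow-keyed byte counters and function name are assumptions): only by exchanging and combining counters can either device reconstruct a network-wide view.

```python
def merge_flow_stats(local, remote):
    """Combine per-flow byte counts observed by two redundant devices.

    Each device on a load-shared route sees only part of the traffic;
    summing the exchanged counters yields the network-wide totals
    neither device can observe alone. Purely illustrative.
    """
    merged = dict(local)
    for flow_id, byte_count in remote.items():
        merged[flow_id] = merged.get(flow_id, 0) + byte_count
    return merged
```

Absent such an exchange, each device's view is exactly its `local` dictionary, which is the informational shortfall the paragraph identifies.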
Synchronization of network devices in redundant network topologies also presents certain technical challenges in the realm of data compression. Data compression, caching, and other technologies that optimize network traffic are often deployed to improve the efficiency and performance of a computer network and ease congestion at bottleneck links. For example, implementing data compression and/or caching technology can improve network performance by reducing the amount of bandwidth required to transmit a given block of data between two network devices along a communications path. Data compression technologies can be implemented on routing nodes, without alteration of client or server end systems or the software applications executed therein, to reduce bandwidth requirements along particularly congested portions of a communications path. For example, tunnel technologies, like those used in Virtual Private Network (VPN) implementations, establish tunnels through which network traffic is transformed upon entering a first network device in a communications path and restored to substantially the same state upon leaving a second network device. However, issues concerning synchronization of compression dictionaries, as well as the additional overhead of updating flow control statistics in light of packet compression, present certain technical challenges.
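The dictionary-synchronization issue can be demonstrated with zlib's preset-dictionary feature: the device restoring the traffic must hold exactly the same dictionary the compressing device used, or restoration fails. This is a minimal sketch, not the compression scheme of any particular product, and the sample dictionary contents are illustrative only.

```python
import zlib

# Both devices must provision an identical preset dictionary
# (contents here are illustrative only).
SHARED_DICT = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"

def compress_with_dict(payload, zdict):
    # Seed the compressor with the shared preset dictionary.
    c = zlib.compressobj(zdict=zdict)
    return c.compress(payload) + c.flush()

def decompress_with_dict(blob, zdict):
    # The decompressor needs the very same dictionary to restore the data.
    d = zlib.decompressobj(zdict=zdict)
    return d.decompress(blob) + d.flush()
```

A device holding a stale or differing dictionary cannot restore the payload (zlib detects the mismatch via the dictionary checksum embedded in the stream), which is why redundant devices in a tunnel's path must keep their dictionaries synchronized.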
In light of the foregoing, a need in the art exists for methods, apparatuses, and systems that facilitate the synchronization of network traffic compression mechanisms deployed in redundant network topologies. Embodiments of the present invention substantially fulfill these needs.