Enterprises have become increasingly dependent on computer network infrastructures to provide services and accomplish mission-critical tasks. Indeed, the performance, security, and efficiency of these network infrastructures have become critical as enterprises increase their reliance on distributed computing environments and wide area computer networks. To that end, a variety of network devices have been created to provide data gathering, reporting, and/or operational functions, such as firewalls, gateways, packet capture devices, bandwidth management devices, application traffic monitoring devices, and the like. For example, the TCP/IP protocol suite, which is widely implemented throughout the world-wide data communications network environment called the Internet and many wide and local area networks, omits any explicit supervisory function over the rate of data transport over the various devices that comprise the network. While there are certain perceived advantages, this characteristic has the consequence of placing very high-speed and very low-speed traffic in potential conflict, which produces certain inefficiencies. Certain loading conditions degrade the performance of networked applications and can even cause instabilities that lead to overloads, temporarily halting data transfer. In response, certain data flow rate control mechanisms have been developed to control and optimize the efficiency of data transfer, as well as to allocate available bandwidth among a variety of business enterprise functionalities. For example, U.S. Pat. No. 6,038,216 discloses a method for explicit data rate control in a packet-based network environment without data rate supervision. Data rate control directly moderates the rate of data transmission from a sending host, resulting in just-in-time data transmission to control inbound traffic and reduce the inefficiencies associated with dropped packets.
Bandwidth management devices allow for explicit data rate control for flows associated with a particular traffic classification. For example, U.S. Pat. No. 6,412,000 discloses automatic classification of network traffic for use in connection with bandwidth allocation mechanisms. U.S. Pat. No. 6,046,980 discloses systems and methods allowing for application-layer control of bandwidth utilization in packet-based computer networks. For example, bandwidth management devices allow network administrators to specify policies operative to control and/or prioritize the bandwidth allocated to individual data flows according to traffic classifications. Network security is another concern, encompassing the detection of computer viruses, as well as the prevention of Denial-of-Service (DoS) attacks on, or unauthorized access to, enterprise networks. Accordingly, firewalls and other network devices are deployed at the edge of such networks to filter packets and perform various operations in response to security threats. In addition, packet capture and other network data gathering devices are often deployed at the edge of, as well as at other strategic points in, a network to allow network administrators to monitor network conditions.
Enterprise network topologies can span a vast array of designs and connection schemes depending on the enterprise's resource requirements, the number of locations or offices to connect, desired service levels, costs, and the like. A given enterprise often must support multiple LAN or WAN segments that serve headquarters, branch offices, and other operational and office facilities. Indeed, enterprise network design topologies often include multiple, interconnected LAN and WAN segments in the enterprise's intranet, and multiple paths to extranets and the Internet. Enterprises that cannot afford the expense of private leased lines to develop their own WANs often employ frame relay or other packet-switched networks, together with Virtual Private Networking (VPN) technologies, to connect private enterprise sites via a service provider's public network or the Internet. Some enterprises also use VPN technology to create extranets with customers, suppliers, and vendors. These network topologies often require the deployment of a variety of network devices at each remote facility. In addition, some network systems are end-to-end solutions, such as application traffic optimizers using compression tunnels, requiring network devices at each end of a communications path between, for example, a main office and a remote facility.
Denial-of-Service (DoS) attacks are a common concern among network administrators. For example, a distributed denial-of-service (DDoS) attack is one in which a multitude of compromised hosts attack a single target, such as a web server, by transmitting large numbers of packets to deny service for legitimate users of the targeted system. Specifically, the veritable flood of incoming messages to the targeted system essentially forces it to shut down, thereby denying services of the system to legitimate users. A hacker, for example, may implement a DDoS attack by identifying and exploiting vulnerabilities in various end systems that are reachable over the Internet. For example, a hacker may identify a vulnerability in one end system connected to a network, making it the DDoS “master.” It is from the master system that the intruder identifies and communicates with other systems connected to the network that can be compromised. The DDoS master installs hacking tools on multiple, compromised systems. With a single command, the hacker can instruct the compromised hosts to launch one of many DoS attacks against specified target systems.
The DoS attacks launched by the compromised systems can take a variety of forms. Common forms of denial-of-service attacks, for example, include buffer overflow attacks and SYN attacks. In a buffer overflow attack, compromised systems send more network traffic to a network address than the data buffers supporting the targeted system can handle. Certain buffer overflow attacks exploit known characteristics of the buffers supporting a given network application, such as email servers. For example, a common buffer overflow attack is to send email messages with attachments having large file names. The large attachment file names quickly flood the buffer associated with common email applications. Other buffer overflow attacks involve the transmission of other types of packets, such as Internet Control Message Protocol (ICMP) packets and Distributed Component Object Model (DCOM) packets.
So-called SYN attacks are also common. When a session is initiated between a Transmission Control Protocol (TCP) client and a TCP server, a very small buffer space exists to handle the usually rapid "hand-shake" messages that set up the TCP connection. The session-establishing packets include a SYN field that identifies the sequence in the message exchange. An attacker can send a number of connection requests very rapidly and then fail to respond to the replies. This leaves the half-open connections in the buffer, so that other, legitimate connection requests cannot be accommodated. Although each entry in the buffer is dropped after a certain period of time without a reply, the effect of many bogus SYN packets is to make it difficult for legitimate session requests to get established.
In addition to posing a problem for the targeted end systems, these DoS attacks also create problems for network devices, such as application traffic management systems, disposed at the edge of enterprise networks and/or at a point in the communications path between a compromised end system and a targeted system. For example, and referring to FIG. 1, assume for didactic purposes that end systems 42 on network 40 have been compromised and have initiated a DoS attack against targeted system 43. As discussed above, the compromised end systems 42 transmit a large number of ICMP or SYN packets, for example, to the targeted system 43. An application traffic management device 30, for example, encounters these packets and, pursuant to its configuration, processes them as part of its application traffic management functions. Processing the inordinate number of packets from the compromised end systems, however, quickly overwhelms the capacity of the network device 30, such as the system bus and central processing unit (CPU), requiring that a large number of packets be dropped. One prior art load shedding mechanism is referred to as Random Early Discard (RED). According to such Random Early Discard mechanisms, packets are chosen at random for discard to shed the load placed on application network device 30 by the DoS attack.
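The random discard behavior referred to above can be sketched as follows. The thresholds and maximum drop probability are illustrative assumptions; the sketch shows only the core idea of a classic RED-style mechanism, in which the probability of discarding an arriving packet rises with the average queue length, so that under an attack load most arriving packets are dropped at random:

```python
import random

# Illustrative parameters; real deployments tune these per device and link.
MIN_TH = 20     # packets: below this average queue length, nothing is dropped
MAX_TH = 80     # packets: at or above this, every arriving packet is dropped
MAX_P = 0.1     # maximum drop probability at the upper threshold


def red_should_drop(avg_queue_len):
    """Return True if an arriving packet should be randomly discarded."""
    if avg_queue_len < MIN_TH:
        return False              # light load: admit everything
    if avg_queue_len >= MAX_TH:
        return True               # overload: shed every arriving packet
    # Between the thresholds, drop probability ramps up linearly.
    p = MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```

Because the drop decision ignores what the packet is, an attacker's SYN flood and an administrator's management session are discarded with equal probability, which motivates the criticisms that follow.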
The use of Random Early Discard mechanisms can be problematic. For example, random early discard techniques adversely affect the flow of legitimate network traffic. Indeed, random early discards may actually exacerbate the problem due to the additional network traffic associated with re-transmissions of the dropped packets. Moreover, the packets randomly chosen for discard may include Web User Interface (WUI) or Command Line Interface (CLI) session packets intended for application network device 30 itself, rendering it difficult or impossible for network administrators to access the device 30 at such a critical time. For instance, this circumstance may render it difficult for a network administrator to receive diagnostic or monitoring data from application network device 30, and/or to configure application network device 30 in a manner that responds to the DoS attack.
In addition, even with random early drop mechanisms, the system resources of network device 30 can be severely impacted. For example, inbound packets received at network device 30 typically consume device resources, such as the available bandwidth across the system bus of network device 30, before being discarded. This circumstance ties up system resources that would otherwise be available for other processing tasks. For example, by consuming large amounts of bandwidth across the system bus, the large number of inbound packets adversely affects the processing of network traffic and the egress of packets from network device 30. Traffic or packet throughput suffers, therefore, while network device 30 waits for system resources to become available.
In light of the foregoing, a need in the art exists for methods, apparatuses and systems directed to enhanced load shedding mechanisms that address the foregoing limitations. For example, a need in the art exists for methods, apparatuses and systems enabling preferential packet load shedding mechanisms that reduce the chance that legitimate network traffic is dropped during a DoS attack or other event where one or more hosts generate a disproportionate amount of network traffic. A need also exists in the art for methods, apparatuses and systems that facilitate access to network devices during DoS attacks or other similar events. A need further exists in the art for packet load shedding mechanisms that reduce the impact on system resources. Embodiments of the present invention substantially fulfill these needs.