Various types of systems have been developed for handling unwanted network data transmission incorporating a number of different technologies. U.S. Pat. No. 5,581,559 issued to Crayford et al. discloses a method that verifies the integrity of data transmitted over a network by comparing the destination address for a data packet with end station addresses stored on network repeaters. Where the destination address fails to match the stored end station addresses, the data packet will be disrupted.
U.S. Pat. No. 6,044,402 issued to Jacobson et al. describes a system in which the only data packets that are transmitted between source and destination network addresses are those that satisfy the blocking policies stored by the blocking data structure. Thus, only “pre-approved” data can flow through such a control mechanism. U.S. Pat. No. 5,455,865 issued to Perlman discloses a system that relies upon a stored list of acceptable packet identifiers at each node in the network. U.S. Pat. No. 5,353,353 issued to Vijeh et al. describes a system that determines the acceptability of data packets based upon a destination address/source address match and will disrupt any packet not satisfying these criteria. U.S. Pat. No. 5,850,515 issued to Lo et al. discloses a system that uses source and destination address matching to determine whether packets should be transmitted to an end station or whether the end station should be disabled from participating in the network. It also employs a system in which an end station can be disabled by a program that determines that a certain number of unauthorized packets have been detected. While other variations exist, the above-described designs for handling unwanted network data transmissions are typical of those encountered in the prior art.
U.S. Pat. No. 5,367,523 to Chang et al. discloses an end-to-end, closed-loop flow and congestion control system for packet communications networks which exchanges rate request and rate response messages between data senders and receivers to allow the sender to adjust the data rate to avoid congestion and to control the data flow. Requests and responses are piggy-backed on data packets and result in changes in the input data rate in a direction to optimize data throughput. GREEN, YELLOW and RED operating modes are defined to increase data input, reduce data input and reduce data input drastically, respectively. Incremental changes in data input are altered non-linearly to change more quickly when further away from the optimum operating point than when closer to it. Chang et al. is intended for end-to-end congestion control. Congestion control assumes cooperation between sender and receiver in solving the problem. In a packet flooding defense, the sender, who is the attacker, will never cooperate with the receiver, his victim. Further, the information used by Chang et al. is the source/destination address pair in the packet, which is assumed to be accurate. In an attack, this information will not be accurate: the attacker will falsify the source address in order to confound any defense that relies upon information the attacker controls, such as the source address.
The primary objective of the present invention is to defend against “packet flooding attacks” in which an attacker tries to use up all the bandwidth to the victim by sending data of little or no value (at least to the victim), thereby making more valuable communication with the victim slow or unreliable. A secondary objective is to defend against a related class of attacks in which the attacker tries to use up some other resource by sending more requests of some particular type to the victim than the victim can handle.
One way to view all these attacks is that a resource is being allocated in an unfair way. Well-behaved users request reasonable amounts, while attackers request unreasonable amounts. The most straightforward allocation mechanism, which might be called “first come, first served”, ends up allocating almost all of the resource to the attackers. A more “fair” allocation would reduce the impact of an attacker to that of a normal user.
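The contrast between “first come, first served” and a fair allocation can be illustrated with a max-min fair allocation, a standard technique consistent with (though not mandated by) the description above. The following is a minimal sketch; the function name, user labels, and numeric demands are hypothetical and chosen only for illustration.

```python
def max_min_fair(capacity, demands):
    """Allocate `capacity` among requesters so that small (reasonable)
    demands are fully satisfied and the remainder is split among the
    large (possibly unreasonable) demands.  Under first-come-first-served,
    a single large demand could instead consume nearly everything."""
    allocation = {user: 0.0 for user in demands}
    remaining = dict(demands)
    cap = float(capacity)
    while remaining and cap > 1e-9:
        share = cap / len(remaining)            # equal share of what is left
        satisfied = [u for u, d in remaining.items() if d <= share]
        if satisfied:
            for u in satisfied:                 # grant small demands in full
                allocation[u] = demands[u]
                cap -= demands[u]
                del remaining[u]
        else:                                   # everyone left is demanding
            for u in remaining:                 # more than the equal share
                allocation[u] = share
            cap = 0.0
            remaining = {}
    return allocation
```

With a capacity of 100 units and demands of 10 and 15 from well-behaved users against 1000 from an attacker, the well-behaved users receive their full requests and the attacker is confined to the residue, rather than crowding the others out.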
There are two obvious impediments to the “fair service” goal above. One is lack of a reliable way to associate incoming packets with those users among whom bandwidth should be fairly allocated. The other is lack of control over what packets arrive. The solution described here to both of these problems requires help from the routers that forward packets to the victim.
The defense is distributed among cooperating sites and routers. A set of transitively connected cooperating machines is called a “cooperating neighborhood”. The quality of the defense is related to the size of the cooperating neighborhood, a larger neighborhood providing better defense. Within the neighborhood it is possible to trace the forwarding path of packets. The association of packets with the “users” is approximated by associating packets with “places” in the cooperating neighborhood from which those packets are forwarded. That is, service will be allocated in a fair (or otherwise reasonable) manner among these places. A “place” in this sense is typically a particular interface from which a packet arrived at a cooperating router.
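The association of packets with “places” rather than with (forgeable) source addresses can be sketched as follows. This is an illustrative model only: the `Packet` fields and the choice of a (router, interface) pair as the place key are assumptions for the example, not a prescribed wire format.

```python
from collections import Counter, namedtuple

# Hypothetical packet record: the ingress fields are supplied by the
# cooperating neighborhood's forwarding-path trace, not by the sender.
Packet = namedtuple("Packet", ["src_addr", "ingress_router", "ingress_interface"])

def place_of(packet):
    """A 'place' is the interface at which the packet entered the
    cooperating neighborhood.  The sender-controlled source address
    is deliberately ignored."""
    return (packet.ingress_router, packet.ingress_interface)

def packets_by_place(packets):
    """Count arriving packets per place, the basis for allocating
    service fairly among places."""
    return Counter(place_of(p) for p in packets)
```

Note that two packets bearing different forged source addresses but arriving over the same interface are attributed to the same place, which is precisely what defeats source-address falsification.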
One such place is likely to be shared by many actual users. An attack will deny service to those users sharing the same place. The advantage of a large number of such places is that each place is shared by fewer users, so an attack will deny service to fewer users. It is advantageous to a user who wants to communicate with a particular machine to be in the cooperating neighborhood of that machine, since no attacker from another machine can deny him service. Conversely, an attacker wishing to deny service to as many users as possible prefers to share an entry point into the cooperating neighborhood with as many users as possible.
Routers will supply data about the forwarding path of the packets that arrive at a site. The site can use this data to allocate service as described above among the packets that arrive. This would solve the problem of unfair service if the packets that arrived were a fair sample of those that were sent to the site. This may not be the case, however, if routers are unable to forward all the packets they receive. To some extent fair service is limited by network topology, i.e., too many legitimate users trying to share parts of the same path will inevitably suffer relative to users of uncrowded paths. However, another potential cause of this problem is a flooding attack against a router. That problem is solved by letting routers allocate their services in a similar way to that described above for sites. That is, they allocate the limited resource of forwarding bandwidth along any given output in a reasonable way among some set of places in the cooperating neighborhood.
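One simple way a router could allocate an output's forwarding slots among places is round-robin service of per-place queues, a sketch consistent with the allocation just described. The class and method names are hypothetical, and a real router would bound queue lengths and account for packet sizes.

```python
from collections import deque

class PerPlaceForwarder:
    """Round-robin service over per-place queues: each place with
    backlogged packets gets one forwarding slot per cycle, so a
    flooding place cannot monopolize the output."""
    def __init__(self):
        self.queues = {}       # place -> deque of waiting packets
        self.order = deque()   # places with backlog, in service order

    def enqueue(self, place, packet):
        if place not in self.queues:
            self.queues[place] = deque()
            self.order.append(place)
        self.queues[place].append(packet)

    def forward_next(self):
        """Return the next packet to forward, or None if idle."""
        while self.order:
            place = self.order.popleft()
            q = self.queues[place]
            packet = q.popleft()
            if q:
                self.order.append(place)  # still backlogged: rejoin cycle
            else:
                del self.queues[place]
            return packet
        return None
```

If an attacker's place has five packets queued and a legitimate place has one, the legitimate packet is forwarded on the second slot rather than waiting behind the entire flood.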
The final step in the defense is that cooperating routers will limit the rate at which they forward packets to places that so request. This may not be essential in the allocation of service, but it is useful for limiting the bandwidth used by “unwanted” packets. The rate-limiting request is to be made when a site detects a high rate of unwanted packets coming from one place. This helps the site because it no longer has to process as many unwanted packets. It helps the network by freeing some of the bandwidth for other use.
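The trigger for such a rate-limiting request can be sketched as a per-place counter over a fixed time window. The threshold, window length, and class name are hypothetical, and the actual protocol by which the request reaches the cooperating router is not modeled here.

```python
class RateLimitRequester:
    """Site-side sketch: count unwanted packets per place within a
    fixed window and record a rate-limiting request for that place
    once the count exceeds a threshold."""
    def __init__(self, threshold, window=1.0):
        self.threshold = threshold   # unwanted packets tolerated per window
        self.window = window         # window length in seconds
        self.windows = {}            # place -> (window_start, count)
        self.requests = []           # rate-limit requests issued so far

    def record_unwanted(self, place, now):
        start, count = self.windows.get(place, (now, 0))
        if now - start >= self.window:
            start, count = now, 0            # begin a new window
        count += 1
        self.windows[place] = (start, count)
        if count > self.threshold:
            self.requests.append(place)      # ask the router to rate-limit
            self.windows[place] = (start, 0)
```

Once a place exceeds the threshold, the request is recorded and the counter resets, so one sustained flood yields periodic requests rather than one per packet.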
Even if the traffic is not reduced, the distinction between “wanted” and “unwanted” packets plays an important role in “reasonable” allocation. For a site there are normally some packets (in fact, the great majority) that are expected in a very strong sense. It is reasonable to process these at the highest possible rate. All other packets are not exactly unwanted, but the site is willing to process them at only a limited rate. A reasonable approach is to schedule these as described above (using the places from which they were forwarded) at a limited rate, and regard as “unwanted” those that end up being significantly delayed (or discarded).
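The two-tier policy above can be sketched as a classifier: strongly expected packets are processed at full rate, all others draw on a limited per-interval budget, and those exceeding it are regarded as “unwanted”. The predicate and budget here are stand-ins; a full implementation would also spread the limited budget across places as described earlier.

```python
def classify(packets, is_expected, budget):
    """Split packets into three classes: processed immediately,
    processed at a limited rate, and regarded as 'unwanted' because
    the limited budget for this interval is exhausted."""
    process_now, limited, unwanted = [], [], []
    for pkt in packets:
        if is_expected(pkt):          # strongly expected: full rate
            process_now.append(pkt)
        elif budget > 0:              # tolerated, but only up to the budget
            limited.append(pkt)
            budget -= 1
        else:                         # significantly delayed or discarded
            unwanted.append(pkt)
    return process_now, limited, unwanted
```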