The invention finds one particularly beneficial application in protecting network services experiencing excessively high bit rates caused by traffic peaks or denial of service attacks.
Providing services and specifying a certain quality of service in a packet network, for example an IP network, is known in the art. The quality of service is expressed by a maximum authorized bit rate or bandwidth, for example.
Various mechanisms are used to monitor the volume of incoming traffic and to manage and adapt the bandwidth. These techniques include rate limiting, which monitors the volume of traffic going to a service and the bit rate at which it is transmitted, in order to keep that traffic within the limit set for the service. Existing rate limiting algorithms are based on using packet queues.
A first known rate limiting algorithm is the leaky bucket algorithm, which regulates traffic arriving at a queue in order to output a fixed flow to the network. The algorithm monitors the transmission time intervals of the packets so that the effective bit rate of packets sent to the service over the network complies with the specified target bit rate. Thus if the packets sent to the service are large, the time intervals between packets are increased to comply with an average bit rate compatible with the target bit rate. The average bit rate is calculated as a weighted average of the bit rates obtained while sending the last N packets and while waiting between two transmissions. This algorithm is very suitable for sending at a fixed bit rate over the network, but it does not enable more resources to be used when the network is relatively lightly loaded.
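The spacing of transmission intervals described above can be sketched as follows. This is a minimal illustrative shaper, not the implementation referred to in the prior art; the class and method names are assumptions, and the sketch uses the simple rule that a packet of S bytes reserves an interval of S divided by the target bit rate.

```python
class LeakyBucketShaper:
    """Illustrative leaky-bucket shaper: spaces packet transmissions so
    that the effective bit rate does not exceed a target bit rate."""

    def __init__(self, target_rate):
        self.target_rate = target_rate  # target bit rate, in bytes per second
        self.next_send = 0.0            # earliest time the next packet may go out

    def delay_for(self, packet_size, now):
        """Return how long to wait before sending a packet of
        `packet_size` bytes at time `now`. Larger packets push the
        next transmission slot further out, so the time interval
        between packets grows with packet size."""
        wait = max(0.0, self.next_send - now)
        send_time = now + wait
        # Reserve the interval this packet occupies at the target rate.
        self.next_send = send_time + packet_size / self.target_rate
        return wait
```

For example, at a target rate of 1000 bytes/s, two back-to-back 500-byte packets are spaced 0.5 s apart, so the average bit rate stays at the target.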
A second known rate limiting algorithm is the token bucket algorithm, which uses tokens corresponding to authorizations to send a certain volume of data to a service over the network. The algorithm regularly fills a token queue. If the target bit rate is 1 Mbyte/s and a token corresponds to the right to transmit 1 kbyte over the network, the queue is filled with tokens at the rate of 1000 tokens per second. In parallel with this, the algorithm uses a second queue for packets to be sent. A packet of size S is sent over the network if there are sufficient tokens of size J in the token queue to transmit S bytes over the network. For the packet to be sent it is therefore necessary for there to be at least N tokens in the token queue, where N=S/J. This mechanism enables traffic to be sent so long as there are tokens in the queue, which may have accumulated when the network was relatively lightly loaded. That mechanism is therefore suitable for managing traffic peaks.
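A minimal sketch of this token mechanism, using the figures from the text (tokens of J bytes, refilled at a rate matching the target bit rate, with a packet of S bytes consuming N=S/J tokens). The class, its names, and the bucket capacity are illustrative assumptions, not part of the prior-art description.

```python
class TokenBucket:
    """Illustrative token bucket: tokens accumulate at a fixed rate and
    a packet is sent only if enough tokens are available to cover it."""

    def __init__(self, token_bytes, fill_rate, capacity):
        self.token_bytes = token_bytes  # J: bytes one token authorizes
        self.fill_rate = fill_rate      # tokens added per second
        self.capacity = capacity        # maximum tokens that can accumulate
        self.tokens = 0.0
        self.last = 0.0                 # time of the last refill

    def _refill(self, now):
        # Tokens accumulate while the network is lightly loaded,
        # up to the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.fill_rate)
        self.last = now

    def try_send(self, packet_size, now):
        """A packet of S bytes is sent only if at least N = S / J
        tokens are present; sending consumes those tokens."""
        self._refill(now)
        needed = packet_size / self.token_bytes
        if self.tokens >= needed:
            self.tokens -= needed
            return True
        return False
```

With token_bytes=1000 and fill_rate=1000 tokens/s, the sustained rate is 1 Mbyte/s, while the capacity bounds the size of the traffic peak that accumulated tokens can absorb.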
The above algorithms process a flow or a type of flow going to the service. If the queues reach saturation point, packets are discarded regardless of where they originate. Those algorithms therefore do not offer a sufficiently refined level of processing to discriminate between different types of contributor, namely small contributors using the bandwidth within acceptable limits and large contributors exceeding those limits.
The published US application 2006/0036720 discloses a rate limiting method applied to instances of events of certain types. One example of an event type is a DNS (domain name service) protocol message, and one example of an instance of that type of event is the identifier of the source that sent a message of that type. Rate limiting then consists of taking action (for example discarding the message) if the number of instances of events exceeds a predefined threshold. The method can therefore limit traffic coming from a contributor. However, the threshold that triggers rate limiting is fixed a priori, so the method cannot adapt rate limitation as a function of the actual use of the bandwidth. With the method described, if a contributor sends traffic to the service at a bit rate above the set threshold, that contributor's traffic is rejected even though it may comply with the target bit rate for the service, notably if no other contributor is sending traffic to the service at the same time.
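The fixed-threshold behaviour criticized above can be sketched as per-instance event counting. This is only an illustration of the general idea, not the implementation of US 2006/0036720; the function names and the keying of counts by source identifier are assumptions.

```python
from collections import defaultdict

def make_fixed_threshold_limiter(threshold):
    """Illustrative per-instance rate limiter with an a-priori
    threshold: events are counted per instance (here, per source
    identifier) and an instance is limited once its count exceeds
    the threshold."""
    counts = defaultdict(int)

    def allow(source_id):
        counts[source_id] += 1
        # The threshold is static: it does not adapt to how much of
        # the service's bandwidth is actually in use, so a lone
        # contributor can be rejected even when the service's target
        # bit rate is not being exceeded.
        return counts[source_id] <= threshold

    return allow
```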
There is therefore a requirement for a mechanism to limit traffic going to a network service in compliance with the target bit rate for the service, the mechanism giving small contributors to the service preference over large contributors. Such a mechanism defines and applies a threshold between small contributors, whose traffic is authorized and forwarded to the service, and large contributors, whose traffic is rejected, which threshold can evolve according to the current actual traffic to the service.