A legacy load balancer, for example, uses an HAProxy configuration on Linux to read cookies or URL information contained in each HTTP request arriving from a network, rewrites a header based on this information, and sends the HTTP request to a backend server cluster, so that traffic and resource consumption are balanced across the servers in the backend server cluster. However, the legacy load balancer does not automatically perform a filtering check on traffic from the network, and cannot throttle or discard traffic identified as a cyber attack.
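As a minimal sketch of such a legacy setup, the HAProxy fragment below routes requests by URL prefix and pins clients to backend servers via an inserted cookie; the backend names, server addresses, and the `/api` path are hypothetical and chosen only for illustration:

```haproxy
frontend http_in
    bind *:80
    # route based on a URL prefix contained in the request
    acl is_api path_beg /api
    use_backend api_servers if is_api
    default_backend web_servers

backend web_servers
    balance roundrobin
    # insert a cookie so subsequent requests from the same client
    # stick to the same backend server
    cookie SERVERID insert indirect nocache
    server web1 192.168.0.101:80 check cookie web1
    server web2 192.168.0.102:80 check cookie web2

backend api_servers
    balance roundrobin
    server api1 192.168.0.111:80 check
```

Note that nothing in this configuration inspects traffic for attack patterns; every request that matches a rule is simply forwarded to a backend.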
It is known in the prior art that there is a solution for protecting against ICMP/TCP/UDP flooding by detecting TCP packets based on flow-cleaning technology, which is achieved by retransmitting TCP/UDP packets. However, the known solution is only effective for packets at the TCP/UDP layer, and cannot prevent HTTP flooding at the application layer, which is the seventh layer in the Open System Interconnection (OSI) Reference Model and whose traffic may additionally require decryption.
One conception is to analyze accesses to a URL, and limit access requests according to the number of access requests per unit time, such as queries per second (QPS). Analyzing accesses to the URLs of a large-scale website usually consumes a large amount of memory. In general, it is required to record a timestamp of each access, associated with some combination of data fields such as an IP address, a user identification (USERID) and a uniform resource locator (URL). When the QPS is to be calculated, the recorded time points must be filtered or sorted, which consumes time as well as memory space.
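The per-key timestamp recording described above can be sketched as follows. This is an illustrative implementation, not the solution of any particular system: one timestamp queue is kept per (IP, USERID, URL) combination, and the QPS calculation must scan and discard stale time points on every query, which is exactly the time and memory cost noted above. The window length and the key fields are assumptions for the sketch:

```python
import time
from collections import defaultdict, deque

WINDOW = 1.0  # seconds over which QPS is measured (assumed)

# One timestamp queue per (IP, USERID, URL) combination.
# Memory grows with the number of distinct keys and the request rate.
access_log = defaultdict(deque)

def record_access(ip, user_id, url, now=None):
    """Record the timestamp of one access for the given key."""
    now = time.monotonic() if now is None else now
    access_log[(ip, user_id, url)].append(now)

def qps(ip, user_id, url, now=None):
    """Count accesses in the last WINDOW seconds for the key."""
    now = time.monotonic() if now is None else now
    ts = access_log[(ip, user_id, url)]
    # Filter out timestamps that have fallen outside the window;
    # this per-query scan is the time cost described in the text.
    while ts and ts[0] <= now - WINDOW:
        ts.popleft()
    return len(ts) / WINDOW
```

For example, a request-limiting rule could reject further requests from a key once `qps(...)` exceeds a threshold; the sketch keeps every in-window timestamp, which is why memory consumption scales with traffic volume.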