A service provider benefits from the strong business support capabilities provided by the Internet, but also faces harsh malicious attacks from the public network. A large class of these attacks access services through rapid, repetitive executions of batch programs, such as batch registration, batch posting, brushing rankings, flash-sale spiking, and copying websites with a crawler, thereby impersonating the behaviors of a batch of users. Taking batch registration as an example, if a service provider imposes no restrictions, a batch registration program executing in parallel on a single personal computer can register thousands of counterfeit users in one hour. These counterfeit users can then be used to obtain illegal gains.
Requests from these batch operations consume a large amount of a service provider's computing resources, introducing useless traffic, degrading service performance, and impairing access by normal users. To ensure the proper operation of its services, a service provider needs to consider how to restrict such batch operations (often referred to as "anti-brush"), so that service resources can serve normal users.
Currently, general solutions for restricting batch operations (anti-brush) include the following methods:
1. Restrictions at a network layer: implementing policy control on request frequency at the network layer, e.g.,
(1) Determining specific restrictions according to access conditions for IP addresses and ports, such as the number of accesses allowed per unit time;
(2) Determining rules and policies according to HTTP headers, such as restricting the number of times a URL may be accessed from one IP address per unit time and evaluating information such as the Cookie;
(3) Blocking repetitive requests by changing settings at the browser side using technologies such as Cookie changes and JavaScript, e.g., restricting http_referer for anti-leeching, restricting http_user_agent for anti-crawling, restricting request_method for the request method, and restricting http_cookie to forbid visitors that do not carry correct cookies.
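The per-unit-time access restriction in item (1) above can be sketched with a sliding-window counter keyed by IP address; the window length and threshold below are illustrative assumptions, not values prescribed by any particular method.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60     # unit time over which accesses are counted (assumed)
MAX_REQUESTS = 100      # accesses allowed per IP per window (assumed)

_access_log = defaultdict(deque)  # ip -> timestamps of recently allowed requests

def allow_request(ip, now=None):
    """Return True if this request from `ip` is within the frequency limit."""
    now = time.time() if now is None else now
    q = _access_log[ip]
    # Drop timestamps that have fallen out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= MAX_REQUESTS:
        return False        # over the per-unit-time limit: block the request
    q.append(now)
    return True
```

A real network-layer deployment would enforce the same logic in a firewall or reverse proxy rather than in application code.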
2. Restrictions at an application layer: actively controlling access behavior through programs, e.g.,
(1) Restricting the number of accesses per unit time;
(2) Setting a time interval between accesses;
(3) Setting a block time;
(4) Setting a black list and/or white list;
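The application-layer controls listed above can be combined in one access check; the following is a minimal sketch assuming a minimum interval between accesses, a temporary block time, and black/white lists, with all thresholds chosen only for illustration.

```python
import time

MIN_INTERVAL = 1.0      # required seconds between two accesses (assumed)
BLOCK_TIME = 300.0      # seconds a violating user stays blocked (assumed)

blacklist = set()       # users always denied
whitelist = set()       # users exempt from the checks
_last_access = {}       # user -> time of last allowed access
_blocked_until = {}     # user -> time when a temporary block expires

def check_access(user, now=None):
    """Return True if the user's access is allowed under the control rules."""
    now = time.time() if now is None else now
    if user in whitelist:
        return True
    if user in blacklist or _blocked_until.get(user, 0.0) > now:
        return False
    if now - _last_access.get(user, -MIN_INTERVAL) < MIN_INTERVAL:
        _blocked_until[user] = now + BLOCK_TIME   # too fast: apply block time
        return False
    _last_access[user] = now
    return True
```

As the analysis below notes, choosing suitable values for these rules is itself difficult, and overly strict values reduce usability for normal users.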
3. Shielding accesses from automatic programs by a reverse Turing test (a CAPTCHA, i.e., a verification code). Generally, an open question is posed that humans can answer easily but a machine can hardly resolve; batch requests from programs are restricted by requiring a human to answer the question. Popular verification-code tests currently include picture recognition, answering random questions, voice verification, and so on.
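The challenge/verify flow of a random-question test can be sketched as follows. This text-only arithmetic challenge is a deliberately trivial assumption to show the flow; real deployments use distorted images, audio, or harder questions, since plain arithmetic is easily solved by a machine.

```python
import random

def make_challenge(rng=random):
    """Generate a (question, expected_answer) pair for a simple random question."""
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    question = f"What is {a} plus {b}?"
    return question, str(a + b)

def verify_answer(expected, user_input):
    """Check the user's typed answer against the expected one."""
    return user_input.strip() == expected
```

The server stores the expected answer with the session and only processes the request once `verify_answer` succeeds.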
4. Verification with short messages (SMS), in which a verification code is sent by the service to a user's mobile phone and the user is required to enter the verification code before completing a request.
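The SMS verification step above can be sketched as a two-phase flow: issue a one-time code with an expiry, then check the user's input against it. Here `send_sms` is a placeholder for an operator gateway, and the code length and validity period are assumptions for illustration.

```python
import secrets
import time

CODE_TTL = 300.0        # seconds a code remains valid (assumed)
_pending = {}           # phone -> (code, expiry time)

def send_sms(phone, text):
    # Stub: a call to a real SMS gateway would go here.
    pass

def start_verification(phone, now=None):
    """Issue a 6-digit one-time code and send it to the user's phone."""
    now = time.time() if now is None else now
    code = f"{secrets.randbelow(10**6):06d}"
    _pending[phone] = (code, now + CODE_TTL)
    send_sms(phone, f"Your verification code is {code}")
    return code

def complete_verification(phone, entered, now=None):
    """Return True only for the correct, unexpired code; a code is single-use."""
    now = time.time() if now is None else now
    code, expiry = _pending.get(phone, (None, 0.0))
    if code is None or now > expiry or entered != code:
        return False
    del _pending[phone]
    return True
```

The single-use and expiry checks matter here: without them, an intercepted or replayed code could be reused by a batch program.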
Many deficiencies are present in the above methods, and are briefly analyzed below:
Restrictions on access frequency implemented at the network layer can be easily bypassed and meanwhile have a very high false-blocking rate. For example, NAT architectures are currently widespread, so the IP addresses of many visitors as collected at a server are the same, and it is thus not feasible to implement restrictions based on access frequency. Such restrictions can also be easily bypassed using a proxy technology or by forging an http_cookie and an IP address. In addition, actively controlling access through programs requires control rules to be configured, and it is difficult to guarantee the validity of the rules and to set a suitable black list and/or white list. Inappropriate control rules may reduce the usability of a service; for example, usability may be reduced to some extent when a limit on the number of accesses per unit time is set.
Checking by a verification code is the most common and mature solution at present and is widely used. However, the validity of a verification code depends on whether a machine can effectively recognize and answer the question. Making the question overly difficult inconveniences users, while, with the evolution of machine intelligence, questions of low difficulty cannot effectively prevent automatic recognition by a machine. Progress in OCR technology reduces the effectiveness of tests based on recognizing distorted characters in images, and progress in machine intelligence reduces the reliability of question-answering tests that a machine can answer automatically. In addition, a verification code lowers the user experience and causes great inconvenience to color-blind or elderly people.
SMS-based verification has very high reliability, but also many restrictions: it requires a user to bind his/her mobile phone, may incur additional cost for sending messages, and makes the user's operations cumbersome.