The present disclosure relates generally to the classification or identification of internet devices as “hostile” or “benign” devices. Devices include but are not limited to general computers executing software (such as a web browser) engaging as clients in the hypertext transfer protocol (HTTP) with one or more servers (web servers). Hostile devices are those that are or will be engaging in denial of service attacks. Denial of service attacks attempt to disrupt, destroy, or degrade the services provided by a web server by overwhelming it with a very large rate of requests. The present disclosure provides systems and methods for classifying devices as hostile (attackers) or not, thereby permitting the mitigation of denial of service attacks by device-based filtering of requests.
We will elaborate further on the background of denial of service (DoS) attacks on the public Internet here. The communications protocols controlling the operation of the Internet are known as the “Internet Protocol” (IP) suite and are commonly organized into a system of “layers” numbered one through seven (1-7) according to the International Telecommunication Union (ITU) (1994). Briefly, the seven layers are 1) Physical, 2) Data Link, 3) Network, 4) Transport, 5) Session, 6) Presentation, and 7) Application. Although the IP protocol suite does not explicitly call out the 7-layer OSI (Open Systems Interconnection) model, the OSI model is nonetheless used in common practice when discussing IP protocols. Here, the attention is on “Layer 7,” the “Application Layer.” The hypertext transfer protocol (HTTP) (Internet Engineering Task Force (IETF), 1999) is an application-layer or layer-7 protocol that is itself carried over lower-level protocols, specifically the Transmission Control Protocol/Internet Protocol (TCP/IP). The present disclosure is directed towards managing and mitigating DoS attacks using the HTTP protocol (or its encrypted transport version, HTTPS). In the subsequent discussion of the background, “DoS attack” will generally mean attacks using the HTTP protocol.
The objective of a DoS attack is to deny a web site or service to its legitimate users by making it unavailable through an excessive volume or rate of HTTP requests. This is known as a flooding attack or a basic volumetric attack. These attacks seek to exhaust resources of the target such as network bandwidth, processor capacity, memory, connection pools, and so on. Attacks may be more subtle as well, exploiting a semantic or computational vulnerability of the target. For example, DoS attacks on a web site's “search” function are semantic in nature, since the attacker infers that some back-end database will be involved in processing the search query and may be overloaded more easily.
The attacker accomplishes the DoS attack by using a collection of computers known as “bots” or “zombies” (Mirkovic, 2008). These are computers that have been exploited through various viruses, malware, or security vulnerabilities such that they can be remotely controlled by the attacker to carry out commands. These computers are referred to in the descriptions herein as ‘hostile devices.’ The bots altogether are known as a “botnet.” Botnets range in size from a few thousand computers to a million or more (Buscher, 2012). This provides the attacker with a significant computing-power advantage over the target, enabling the attacker to bring to bear a large and potentially debilitating number of requests against the targeted web servers or front-end devices (an asymmetric cyberwarfare method). Significantly, the actual software executing on a bot that is part of a DoS attack must be an HTTP client. However, unlike the typical web browsers used by people (Internet Explorer, Firefox, Chrome, Safari, Opera, et cetera), these bot web clients will generally (but not always) not execute any client-side scripting languages, including but not limited to ECMAScript, JavaScript, JScript, and so on. The reason is that the bot has no interest in the user-experience features these languages provide, and executing them would only delay the bot from sending its next request in the flood. However, bots are often capable of accepting and returning information tokens such as, without limitation, cookies (IETF, 1999) or Uniform Resource Locator (URL) query parameters.
Client puzzles or proofs-of-work have been used as part of many systems and protocols for addressing DoS attacks. CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart), usually visual puzzles, are a type of client puzzle that is not precisely a proof of work; rather, a successful answer to a CAPTCHA classifies the responder as a human. However, many CAPTCHA schemes are considered ‘broken’ through the use of adversarial optical character recognition (A-OCR) techniques (e.g., PWNtcha, “Pretend We're Not a Turing Computer but a Human Antagonist,” http://caca.zoy.org/wiki/PWNtcha).
Mathematical proofs-of-work are generally considered more robust, and their functional requirements have been well documented (Laurens, 2006). In general, a mathematical proof-of-work must satisfy the following requirements: 1) it must have a solution; 2) server computational costs for puzzle generation and verification must be orders of magnitude smaller than the client computational costs required to find a solution; 3) puzzles must be solved within a limited amount of time; 4) pre-computation of solutions in advance should be infeasible; 5) previous solutions must not assist in finding future solutions; and 6) the puzzle-issuance and puzzle-verification process should not require any persistence of connection state between the server and client. The present systems and methods satisfy these requirements. In addition to the requirements identified by Laurens (2006), puzzles (challenges), their solutions, and related parameters (expiration times, internet domain of applicability) must all be either tamper-proof or at least tamper-evident (Gutzmann, 2001). This requirement prevents clients from, for example, substituting easy puzzles for hard ones. The present systems and methods satisfy this requirement through the use of message authentication codes (MACs).
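The tamper-evidence requirement above can be illustrated with a short sketch. The following Python fragment is purely illustrative and not the disclosed embodiment; the secret key, field names, and parameter set are hypothetical. It shows puzzle parameters bound by an HMAC so that a client substituting an easy puzzle for a hard one is detected, while verification remains stateless (satisfying requirement 6, since the server need only hold the key):

```python
import hashlib
import hmac
import json
import time

SERVER_KEY = b"server-secret-key"  # hypothetical server-side secret


def issue_puzzle(difficulty: int, ttl_seconds: int = 60) -> dict:
    """Issue puzzle parameters made tamper-evident by an HMAC."""
    params = {
        "challenge": "c3f1a9",                      # server nonce (illustrative)
        "difficulty": difficulty,                   # required leading zero bits
        "expires": int(time.time()) + ttl_seconds,  # expiration time
        "domain": "example.com",                    # domain of applicability
    }
    # MAC is computed over a canonical (sorted-key) serialization.
    msg = json.dumps(params, sort_keys=True).encode()
    params["mac"] = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
    return params


def verify_params(params: dict) -> bool:
    """Reject puzzle parameters that were altered by the client."""
    claimed = params.get("mac", "")
    msg = json.dumps({k: v for k, v in params.items() if k != "mac"},
                     sort_keys=True).encode()
    expected = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)
```

A client lowering the `difficulty` field, for instance, invalidates the MAC and the substitution is detected on the next request.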
A final requirement for layer-7 systems is that they must operate in a manner that does not affect content delivery and rendering at the client web browser. In Feng's approach (Feng, U.S. Pat. No. 8,321,955), URLs are rewritten dynamically by a client-side JavaScript component and additional JavaScript is injected into the pages; thus, what is finally rendered at the client web browser is not what would be sent by a server in the absence of the proof-of-work protocol element. Given the pervasive, complex, and unavoidable use of JavaScript today, Feng's approach potentially disrupts the proper rendering of the content for various reasons, such as namespace collisions, function overloads, and modification of the execution sequence of scripts. The present systems and methods use a short-duration ticketing approach that permits unmodified delivery of server content to the browser, thereby avoiding that problem.
While there are many possible proof-of-work systems, the most commonly used are what are known as “hash pre-image” puzzles. In this type of puzzle, a client must find a ‘solution’ such that the concatenation of a server challenge and that solution, used as the pre-image (input string) to a one-way function (including but not limited to a ‘hash’ function such as Message Digest 5 (MD5), Secure Hash Algorithm (SHA), or Secure Hash Algorithm 256 (SHA-256)), produces an output with a server-specified number of leading zero bits. The number of leading zero bits is typically called the puzzle difficulty. More difficult puzzles impose larger computational costs and times on the client side; adjusting the difficulty level has the effect of ‘throttling’ the client, since it cannot make HTTP requests while searching for a solution. Variations on the basic hash pre-image puzzle exist; some require that the leading bits evaluate to a number less than or greater than some specified value (Rangasamy, 2011).
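A hash pre-image puzzle of the kind described above can be sketched in a few lines of Python. This is an illustrative example only (SHA-256 and the string encoding of the nonce are arbitrary choices, not part of the disclosure): the client brute-forces a nonce whose concatenation with the server challenge hashes to a digest with the required number of leading zero bits, while the server verifies with a single hash.

```python
import hashlib
from itertools import count


def leading_zero_bits(digest: bytes) -> int:
    """Count the number of leading zero bits in a digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits


def solve(challenge: bytes, difficulty: int) -> int:
    """Client side: brute-force a nonce so that
    SHA-256(challenge || nonce) has `difficulty` leading zero bits.
    Expected cost is about 2**difficulty hash evaluations."""
    for nonce in count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce


def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """Server side: a single hash suffices to verify the solution."""
    digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
    return leading_zero_bits(digest) >= difficulty
```

The asymmetry between `solve` (roughly 2^difficulty hashes) and `verify` (one hash) is what satisfies the requirement that server costs be orders of magnitude smaller than client costs; raising the difficulty by one bit roughly doubles the client's expected work.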
Much work has been done on DoS attacks at the lower layers of the communication model (e.g., Hill, U.S. Pat. No. 6,088,804; Cox, U.S. Pat. No. 6,738,814), since layer 3-4 attacks (e.g., the Transmission Control Protocol Synchronize (“TCP SYN”) attack) form a large part of the DoS attacker's arsenal. At the application layer, Feng and Kaiser (Feng et al., U.S. Pat. No. 8,321,955) developed a DoS mitigation approach using a proof-of-work client puzzle for both layers 3-4 and layer 7. Much of the disclosure in Feng's patent (U.S. Pat. No. 8,321,955) is focused on the lower network layers, but a final section discusses a client puzzle (a.k.a. proof-of-work) for an HTTP server. In Feng's system and method, claims are made for prioritizing traffic based on its past history of traffic rate or volume. All traffic is accepted in that system but is directed to a high- or low-priority service running on a single computer. In the details of the embodiment of Feng's disclosure, a probabilistic data structure (a “Bloom filter”) is used to tally historical request counters on a per Internet Protocol address (IP address) basis. One of several drawbacks of Feng's system is that it works only on a single computer, due to the local data structure (the Bloom filter) used to determine puzzle difficulty based on request history. In practice, almost all large and commercially significant web sites are complex distributed systems (similar to elements 101-103 in FIG. 1). The need for distributed shared memory then arises, and this is addressed by the present disclosure. Feng's use of the IP address alone also presents practical problems; for example, many internet service providers (e.g., Verizon, America Online) operate caching proxy servers as intermediaries between HTTP clients and servers. From the web server's point of view, then, there is a single IP address (that of the proxy) behind which there may be many thousands of distinct HTTP clients (“devices” as we call them here).
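The per-IP request-tallying approach described above can be sketched with a counting Bloom filter. The sketch below is illustrative only (sizes, hash count, and the use of SHA-256-derived indexes are assumptions, not Feng's embodiment); note that, being an in-process array, such a structure is inherently local to a single computer, which is the drawback noted above.

```python
import hashlib


class CountingBloom:
    """Counting Bloom filter tallying per-IP request counts.
    Sizes are illustrative; a real deployment would tune them."""

    def __init__(self, size: int = 1024, hashes: int = 3):
        self.size = size
        self.hashes = hashes
        self.counters = [0] * size  # local memory: not shareable across servers

    def _indexes(self, key: str):
        # Derive `hashes` independent indexes from salted SHA-256 digests.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, ip: str) -> None:
        """Record one request from the given IP address."""
        for idx in self._indexes(ip):
            self.counters[idx] += 1

    def estimate(self, ip: str) -> int:
        """Estimated request count: the minimum counter across the
        hash positions (an over-estimate under collisions)."""
        return min(self.counters[idx] for idx in self._indexes(ip))
```

Puzzle difficulty could then be chosen as an increasing function of `estimate(ip)`; but because every client behind a caching proxy shares one IP, the shared counter grows quickly and all of them are penalized together.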
Using Feng's request-counting approach, the proxy IP will be incorrectly de-prioritized, thus delivering a poor user experience to all clients behind the proxy. That drawback is addressed by the current disclosure's use of device-fingerprinting methods, which differentiate the many devices sharing the same IP address (this is also applied to the classification of bots that are polymorphic in their device fingerprint).
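To illustrate how devices sharing one proxy IP can be told apart, the following Python sketch derives a fingerprint from request attributes that vary per device. The header set and truncation length are hypothetical examples only; the disclosed fingerprinting methods may use other attributes.

```python
import hashlib


def device_fingerprint(request_headers: dict) -> str:
    """Derive a short fingerprint from per-device request attributes,
    so clients behind a shared proxy IP are counted separately.
    The chosen header set is illustrative, not exhaustive."""
    parts = [request_headers.get(h, "") for h in (
        "User-Agent", "Accept-Language", "Accept-Encoding", "Accept")]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()[:16]
```

Keying request counters by such a fingerprint (alone or combined with the IP address) avoids penalizing every client behind a proxy for one hostile device's traffic, although a bot that randomizes these attributes per request (a polymorphic fingerprint) must be classified by other means, as noted above.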