It has become commonly accepted that network-level firewalls and intrusion prevention systems (IPS) are inadequate to protect against application-level threats and vulnerabilities. Web application firewalls (WAFs) are designed to operate at the application layer and to inspect application traffic. WAFs have been employed to mitigate web application vulnerabilities, such as parameter manipulation, structured query language (SQL) injection, cookie poisoning, and so on.
Currently available WAFs must differentiate between legal and illegal traffic in order to decide which sessions to block. To this end, a WAF should know the behavior of the application it must protect; the application's behavior is adaptively learned through an analysis of representative traffic. However, an adaptive learning approach is inherently prone to false positives (i.e., legal behavior is tagged as illegal) and false negatives (i.e., illegal behavior is tagged as legal). As a result, adaptive learning is often used alongside black lists, i.e., the blocking of previously detected attacks by means of "attack signatures". Nevertheless, signature-based detection cannot handle zero-day attacks or many malicious attacks initiated through legitimate actions (e.g., a bank manager authorizing a loan versus a customer authorizing that same loan).
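The limitation of a signature-based black list can be sketched as follows. This is a minimal illustrative sketch, not an actual WAF implementation; the signature patterns and payloads are invented for this example. A payload matching a known signature is blocked, while a novel (zero-day) payload with no matching signature passes through unflagged.

```python
import re

# Hypothetical attack-signature black list: each entry is a regular
# expression describing a previously observed attack pattern.
SIGNATURES = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # a classic SQL-injection form
    re.compile(r"(?i)<script[^>]*>"),          # a reflected script-injection form
]

def is_blacklisted(payload: str) -> bool:
    """Return True if the payload matches any known attack signature."""
    return any(sig.search(payload) for sig in SIGNATURES)

# A previously seen attack pattern is caught...
print(is_blacklisted("id=1 UNION SELECT password FROM users"))  # True
# ...but a zero-day variant with no matching signature is not.
print(is_blacklisted("id=1; DROP TABLE users"))  # False
```

The gap shown on the last line is exactly why signature-based detection must be supplemented by behavioral approaches, and why neither technique alone suffices.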
FIG. 1 depicts a typical computing environment for execution of a web application by a web client 110 and a web server 120 connected over a network. A web session typically consists of the web client 110 and the server 120 communicating, for example, over HTTP or HTTPS. The client 110 sends HTTP messages consisting of requested URLs. The server 120 responds to HTTP messages with client-side application code, generally in the form of HTML pages, JavaScript, and cascading style sheets (CSS). The client-side application code typically includes both display information and executable code, and may be self-triggered or activated by a user. The execution of the client-side application code by the client 110 generates HTTP messages sent back to the server 120. The payload of these messages may include parameters to be received and processed by server-side application code.
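The request structure described above can be illustrated by parsing a raw HTTP request line of the kind the client 110 sends. This is a sketch for illustration only; the URL path and parameter names (`/account/transfer`, `amount`, `to`) are invented, and a real server would extract such parameters before passing them to server-side application code.

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical request line sent by a web client over HTTP.
request_line = "GET /account/transfer?amount=100&to=12345 HTTP/1.1"

# Split the request line into its method, requested URL, and version.
method, url, version = request_line.split()

# The query string carries the parameters that server-side code processes.
parts = urlsplit(url)
params = parse_qs(parts.query)

print(method)      # GET
print(parts.path)  # /account/transfer
print(params)      # {'amount': ['100'], 'to': ['12345']}
```

It is precisely these client-supplied parameters that, if left unvalidated, open the server to the attacks discussed below.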
In the environment illustrated in FIG. 1, both the client 110 and the server 120 are vulnerable to harmful attacks. The client 110 is vulnerable because it executes software code received from an untrusted server 120. The server 120 is vulnerable because it processes unvalidated requests and parameters (which may also include software code) from untrusted clients (e.g., client 110). The attacks may include, for example, the infection of client machines with malware and/or spyware, phishing attacks used for identity theft and data theft, and so on.
Currently available secure web gateways (SWGs) protect clients from web-based attacks by using various methods for detecting and blocking malicious web attacks. These methods include signature-based detection, URL filtering, and static code analysis. All of these methods require prior knowledge of attacks or attack templates and have proven to be quite ineffective in the dynamic environment of the Internet.
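Of the methods listed, URL filtering illustrates the reliance on prior knowledge most directly. The following is a minimal sketch, not an actual SWG implementation; the blocked host names are invented for this example. A request to a known-bad host is denied, but a request to a newly registered malicious host, absent from the list, is allowed.

```python
from urllib.parse import urlsplit

# Hypothetical host blocklist of the kind a secure web gateway consults.
BLOCKED_HOSTS = {"malware.example", "phish.example"}

def allow_request(url: str) -> bool:
    """Permit the request unless its host appears on the blocklist."""
    host = urlsplit(url).hostname or ""
    return host not in BLOCKED_HOSTS

print(allow_request("http://malware.example/payload"))  # False (known host)
print(allow_request("http://new-threat.example/"))      # True (unknown host)
```

Because the list can only ever contain hosts already observed to be malicious, the second request demonstrates the fundamental weakness: in the dynamic environment of the Internet, new hosts appear faster than any list can be updated.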
Therefore, it would be advantageous to provide a solution that would cure the deficiencies of existing web application security solutions.