Fast Flux Service Networks use the fast flux technique to hide servers hosting malicious content from Internet Service Providers who might otherwise close down the web sites those servers host. The fast flux technique is a DNS technique used by botnets to hide, for example, phishing and malware delivery sites behind an ever-changing network of compromised hosts acting as proxies. It can also refer to the combination of peer-to-peer networking, distributed command and control, web-based load balancing and proxy redirection used to make malware networks more resistant to discovery and counter-measures. The Storm Worm is one of the recent malware variants to make use of this technique.
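The churn behaviour described above can be illustrated with a short sketch: a domain fronted by a fast flux service network rotates through a large pool of compromised hosts, so repeated A-record lookups accumulate many distinct IP addresses, whereas a conventionally hosted site keeps returning the same small set. The following is a minimal, illustrative sketch (all domain data and the threshold are invented, not taken from any of the papers discussed below); a real implementation would obtain the record sets by repeatedly resolving the domain.

```python
# Illustrative sketch: flag fast-flux-like behaviour from repeated DNS
# A-record lookups. The query results below are synthetic; in practice
# they would come from resolving the suspect domain over time.

def distinct_ips(query_results):
    """Count distinct IP addresses seen across a series of A-record sets."""
    seen = set()
    for records in query_results:
        seen.update(records)
    return len(seen)

def looks_fast_flux(query_results, threshold=10):
    """Heuristic: many distinct IPs over repeated lookups suggests fast flux.

    The threshold of 10 is an assumed, illustrative value.
    """
    return distinct_ips(query_results) >= threshold

# A stable site keeps returning the same small set of addresses...
stable = [["203.0.113.10", "203.0.113.11"]] * 20

# ...while a fast-flux domain rotates through compromised hosts.
fluxy = [[f"198.51.100.{i}", f"192.0.2.{i}"] for i in range(20)]

print(looks_fast_flux(stable))  # False
print(looks_fast_flux(fluxy))   # True
```

The record-count heuristic alone is crude (large legitimate sites also return many addresses), which is why the detection approaches discussed below combine it with further features.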
There have been several attempts to specify ways in which such behaviour can be detected and/or mitigated. Most of these, which concern detecting fast flux service networks, operate by looking for characteristic properties of the website (e.g. its registered domain owner) or of the results of performing DNS queries for the website's URL (especially by examining the DNS A records returned for the URL). The two papers discussed below are representative of such approaches.
Thus, a paper entitled "Fast Flux Service Networks: Dynamics and Roles in Hosting Online Scams" by Maria Konte and Nick Feamster studies fast flux behaviour; in it the authors report that fast flux IP addresses are spread widely across the IP address space. They also note that fast flux networks tend to use a different portion of the IP address space from that used by legitimate sites. They further observe that fast flux infected hosts are typically widely geographically distributed, and that this widespread geographical dispersion can also be used for detecting fast flux hosts. They suggest that some sort of detection process could be built around patterns such as these, but propose this as future work.
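One simple way to quantify the kind of IP-space dispersion the authors describe is to count how many distinct network prefixes the returned addresses fall into: addresses served by a single legitimate hosting provider tend to cluster in one or a few prefixes, while fast flux agents are scattered. The sketch below is illustrative only; the /16 prefix length, the sample addresses and any threshold are assumptions, not details taken from the Konte and Feamster paper.

```python
import ipaddress

def distinct_prefixes(ips, prefix_len=16):
    """Count how many distinct /prefix_len networks the addresses fall into.

    The prefix length of 16 is an assumed, illustrative granularity.
    """
    nets = {ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False)
            for ip in ips}
    return len(nets)

# Addresses clustered in one hosting provider's range...
legit = ["203.0.113.5", "203.0.113.9", "203.0.113.77"]

# ...versus addresses scattered across unrelated networks.
flux = ["198.51.100.4", "192.0.2.17", "203.0.113.200", "100.64.3.9"]

print(distinct_prefixes(legit))  # 1
print(distinct_prefixes(flux))   # 4
```

A dispersion score like this could be combined with geographical lookups of the same addresses to capture the wide geographical distribution the authors also observed.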
Similarly, a paper entitled "FluXOR: detecting and monitoring fast-flux service networks" by Emanuele Passerini, Roberto Paleari, Lorenzo Martignoni and Danilo Bruschi of the University of Milano, published in DIMVA 2008, LNCS 5137, pp. 186-206, 2008, describes a mechanism which attempts to identify web sites hiding behind a fast flux service network. Their detection mechanism involves querying various DNS servers for the URL of a suspected website, as though the querier were a normal user device (i.e. a "victim" device if the website is malicious), repeatedly and at frequent intervals. If the website being tested is being serviced by a fast flux service network, then each query is likely to return many different (fast flux agents') IP addresses in the A records. Their detection mechanism is based on a naïve Bayesian classifier which processes several parameters, some obtained from the results of the DNS queries and some relating to the registered domain name owner details for the website being tested (presumably obtained from the WHOIS service).
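The mechanics of such a naïve Bayesian classifier can be sketched in a few lines. The sketch below is not FluXOR's actual classifier: the two binary features (many distinct A-record IPs; a very recently registered domain, as WHOIS data might reveal) and all of the probabilities are invented purely to show how per-feature likelihoods combine into a posterior under the naïve independence assumption.

```python
# Minimal hand-rolled naïve Bayes sketch (illustrative only; the features
# and probabilities are invented, not taken from the FluXOR paper).

# P(feature = True | class) for two binary features:
#   "many_ips":     many distinct IPs observed across repeated DNS queries
#   "young_domain": domain registered very recently (e.g. from WHOIS data)
LIKELIHOOD = {
    "flux":   {"many_ips": 0.95, "young_domain": 0.90},
    "benign": {"many_ips": 0.10, "young_domain": 0.20},
}
PRIOR = {"flux": 0.5, "benign": 0.5}

def posterior_flux(features):
    """Return P(flux | features) under the naïve independence assumption."""
    score = {}
    for cls in ("flux", "benign"):
        p = PRIOR[cls]
        for name, present in features.items():
            q = LIKELIHOOD[cls][name]
            p *= q if present else (1.0 - q)
        score[cls] = p
    return score["flux"] / (score["flux"] + score["benign"])

p = posterior_flux({"many_ips": True, "young_domain": True})
print(round(p, 3))  # high posterior: both suspicious features present
```

In a real system the likelihoods would of course be estimated from labelled training data rather than fixed by hand, and FluXOR uses a larger feature set than the two shown here.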
A well known security provision for use with computer networks generally is the firewall. A firewall is generally established at the entrance to a private network, or in respect of a single device, and acts to prevent certain types of incoming traffic from passing through it (and, in some cases, certain outgoing traffic as well). In general, a well configured firewall should be able to prevent devices from operating as fast flux proxy servers (and indeed should prevent devices from becoming infected with malicious fast flux proxy server code in the first place); however, firewalls may become corrupted by malicious software, or an individual device may not have an appropriate firewall installed, switched on or properly configured, so a firewall approach is not perfect. Furthermore, some users may prefer to avoid using firewalls since they can interfere with tasks which the user is intentionally trying to perform; although in general a personal firewall can be configured to allow a user's desired traffic to pass safely through, this can be difficult to achieve correctly and many users may simply prefer to turn the firewall off. Finally, there may be some situations where it is impractical or objectionable to use a standard firewall (e.g. at a point in a public access network such as at a DSLAM or BRAS, etc.).
Xin Hu, Matthew Knysz and Kang G. Shin: "RB-Seeker: Auto-detection of Redirection Botnets", 11 Jul. 2009 (2009-07-11), pages 1-17, XP002587668, describes a system for detecting domains (i.e. Internet domains, e.g. ADomain.com as part of a Uniform Resource Locator such as www.ADomain.com) which are hidden behind a redirection "botnet"; in particular, the system aims to distinguish malicious redirection botnets from legitimate systems, such as legitimate Content Distribution Networks, which behave in a similar manner to redirection botnets. The system includes a component which examines NetFlow data (taken from a core router of a large university) and probabilistically identifies redirection behaviour based on the transport-layer information available in NetFlow records (packet contents are not available, making it impossible to examine packet payloads and detect redirection behaviour via HTTP status codes or refresh headers). It also includes a component which attracts spam emails (a spam honey pot) and then follows any suspicious (non-whitelisted) URLs to try to identify redirection behaviour when following the URLs. Finally, the system includes a component which attempts to distinguish between suspected botnet redirection cases and legitimate redirection systems such as the CDNs mentioned above.
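The flavour of inferring redirection from transport-layer information alone can be sketched as follows. This is not RB-Seeker's actual algorithm: the heuristic, field names, thresholds and flow records below are all invented for illustration. The idea is simply that a very small, short-lived flow to one server, followed almost immediately by a flow from the same client to a different server, resembles a client fetching a redirect and then following it; with only NetFlow-style metadata (no payloads), such timing and size patterns are all there is to work with.

```python
# Illustrative sketch (not RB-Seeker's algorithm): infer likely HTTP
# redirections from transport-layer flow metadata alone. All thresholds
# and records are invented for illustration.
from collections import namedtuple

Flow = namedtuple("Flow", "client server start_ts duration_s bytes_total")

def redirection_pairs(flows, max_gap_s=2.0, max_bytes=2000):
    """Pair a tiny, short-lived flow with the next flow from the same client.

    A flow of at most max_bytes, followed within max_gap_s by a flow from
    the same client to a *different* server, is treated as redirection-like.
    """
    pairs = []
    last_flow = {}  # most recent flow seen per client
    for f in sorted(flows, key=lambda f: f.start_ts):
        prev = last_flow.get(f.client)
        if (prev is not None and prev.server != f.server
                and prev.bytes_total <= max_bytes
                and f.start_ts - (prev.start_ts + prev.duration_s) <= max_gap_s):
            pairs.append((prev.server, f.server))
        last_flow[f.client] = f
    return pairs

flows = [
    Flow("10.0.0.5", "198.51.100.7", 0.0, 0.1, 600),     # tiny flow: redirector?
    Flow("10.0.0.5", "192.0.2.33", 0.5, 12.0, 250_000),  # follow-up: content host
    Flow("10.0.0.9", "203.0.113.8", 1.0, 30.0, 900_000), # ordinary long flow
]
print(redirection_pairs(flows))  # [('198.51.100.7', '192.0.2.33')]
```

Because legitimate CDN request routing produces very similar flow patterns, a heuristic like this can only flag candidates; a further classification stage, as in RB-Seeker's final component, is needed to separate botnet redirection from benign redirection.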