The current state of sampled network monitoring solutions remains basic, providing service providers with limited information. Many network monitoring applications in use today are interested only in TCP connections that become fully established, so other connection attempts, such as port scans, simply waste resources if they are not filtered out.
A need has arisen for both users and network operators to have better mechanisms to monitor network performance, filter network traffic, and troubleshoot network congestion without introducing additional traffic into the communication network. This is especially relevant to Internet providers that must comply with the Service Level Agreements (SLAs) they provide to customers. As the Internet architecture evolves, SLAs increasingly include quality-of-service requirements such as jitter, throughput, one-way packet delay, and packet loss ratio. The need to monitor network traffic therefore extends to the underlying Internet protocols that enable the World Wide Web.
Detailed visibility into the individual users and business applications using the global network is essential for optimizing performance and delivering network services to business users. Current network performance monitoring mechanisms perform traffic analysis non-invasively with respect to the observed networking environment, and as a result do not affect the performance of the network while carrying out measurements and queries.
For example, Cisco offers NetFlow, a traffic analyzer that identifies traffic flows based on IP source/destination addresses, the protocol ID field, the type-of-service field, and the router port. Once a flow is identified, statistics can be collected for it and exported via the User Datagram Protocol (UDP) when the flow expires. A NetFlow record contains information about sampled flows that pass through the router and provides a digest of the communications, showing which hosts were involved, which services were used, and how much data was exchanged. As another example, Lucent Bell Labs has various research projects in traffic analysis, mainly concentrated on the collection of TCP/UDP/IP packet headers and on off-line traffic analysis, modeling, and visualization.
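The flow-identification scheme described above can be illustrated with a small sketch. This is not Cisco's implementation; the field names, flow-key choice, and expiry handling are illustrative assumptions, showing only the general pattern of aggregating per-flow packet and byte counts under a key built from the listed header fields:

```python
from collections import defaultdict

# Flow key built from the fields named in the text: source/destination IP,
# protocol ID, type of service, and router (input) port. Field names are
# illustrative assumptions, not NetFlow's actual record layout.
FLOW_KEY_FIELDS = ("src_ip", "dst_ip", "proto", "tos", "input_port")

class FlowCache:
    def __init__(self):
        # key -> [packet_count, byte_count]
        self.flows = defaultdict(lambda: [0, 0])

    def observe(self, pkt):
        # Aggregate statistics for the flow this packet belongs to.
        key = tuple(pkt[f] for f in FLOW_KEY_FIELDS)
        stats = self.flows[key]
        stats[0] += 1
        stats[1] += pkt["length"]

    def expire(self, key):
        # On expiry, a real exporter would send the record to a collector
        # over UDP; here we simply remove and return it.
        return key, self.flows.pop(key)

cache = FlowCache()
cache.observe({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
               "proto": 6, "tos": 0, "input_port": 1, "length": 1500})
cache.observe({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
               "proto": 6, "tos": 0, "input_port": 1, "length": 40})
```

Both packets share one flow key, so the cache holds a single record with two packets and 1540 bytes until the flow expires.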
In general, network monitoring tools are able to collect a large amount of data from information sources distributed throughout the network. For example, the Stanford Information Filtering Tool (SIFT) uses an information dissemination server that accepts long-term user queries, collects new documents from information sources, matches the documents against the queries, and continuously updates users with relevant information. SIFT is able to process over 40,000 worldwide subscriptions and over 80,000 daily documents.
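The dissemination loop described above can be sketched as follows. This is a toy model, not SIFT's actual matching algorithm: the keyword-overlap test and the class and method names are illustrative assumptions standing in for SIFT's profile matching:

```python
# Toy sketch of a dissemination server: standing user queries are matched
# against each newly collected document, and matching users are notified.
class Disseminator:
    def __init__(self):
        self.subscriptions = {}  # user -> set of query terms
        self.outbox = []         # (user, document) notifications

    def subscribe(self, user, terms):
        # Register a long-term query for this user.
        self.subscriptions[user] = set(terms)

    def new_document(self, doc):
        # Match the new document against every standing query.
        words = set(doc.lower().split())
        for user, terms in self.subscriptions.items():
            if terms & words:
                self.outbox.append((user, doc))

d = Disseminator()
d.subscribe("alice", ["netflow", "ipfix"])
d.new_document("Weekly digest: netflow export formats compared")
```

The inversion of control is the key point: queries are long-lived and documents stream past them, rather than documents being stored and queried on demand.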
Automated tools for filtering the large amount of information that may be collected are also available. For example, information filtering systems (IFS) require each user to provide a profile representing his or her information needs, and the system then filters the information relevant to that profile. By delivering useful, personalized information, these tools aim at optimizing the daily work of their users.
Also, tracking and monitoring flows is particularly relevant for network vendors who wish to provide access to flow information on their high-end routers; they must therefore devise scalable and efficient algorithms that operate within the limited per-packet processing time available.
These tools are also useful to network providers, allowing them to filter information relevant to implementing cost-saving measures by optimizing network resource utilization, detecting high-cost network traffic, or tracking down anomalous activity in a network. For example, to protect their networks and systems today, network providers deploy a layered defense model that includes firewalls, anti-virus systems, access management, and intrusion detection systems (IDS). The ability to detect the propagation of malware as quickly as possible, and to react efficiently to ongoing attacks inside the network in order to protect the network infrastructure, is becoming a real challenge for network operators. These systems are effective once they correctly identify the illegitimate traffic, based on flow analysis and/or deep packet analysis. Flow-based analysis includes methods for tracking malicious continuous flows by detecting unusual patterns; it usually relies on technologies such as NetFlow, IPFIX, and RTFM implemented in routers.
Many intrusion detection systems (IDS) and network security monitoring (NSM) systems are interested in TCP connections that become fully established; other connection attempts, such as port scans, simply waste resources if they are not filtered out. Moreover, most current IDS and NSM systems operate by restricting clients to a specified number of connections within a certain amount of time, which may result in false-positive detections for active users. For example, SNORT is a lightweight network intrusion detection system that uses a flexible rules language to describe traffic that it should collect or pass, together with a detection engine built on a modular plug-in architecture. SNORT is capable of performing real-time traffic analysis and packet logging on IP networks, and of detecting a variety of attacks and probes, such as buffer overflows, stealth port scans, OS fingerprinting attempts, and more.
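The connection-rate-limiting scheme criticized above can be sketched briefly. The thresholds, class name, and sliding-window policy here are illustrative assumptions, chosen only to show why a fixed connections-per-time limit flags busy legitimate clients as readily as scanners:

```python
import time
from collections import defaultdict, deque

# Sliding-window rate limiter: flag any client that opens more than
# `max_conns` connections within `window` seconds. Threshold values are
# illustrative assumptions, not taken from any particular IDS.
class RateLimitDetector:
    def __init__(self, max_conns=20, window=10.0):
        self.max_conns = max_conns
        self.window = window
        self.history = defaultdict(deque)  # client -> connection timestamps

    def new_connection(self, client, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[client]
        q.append(now)
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        # True for *any* client over the threshold, including a busy
        # legitimate user -- the false-positive weakness noted above.
        return len(q) > self.max_conns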
The Snort Intrusion Filter for TCP (SIFT), in contrast, is a hardware-based IDS that selectively forwards IP packets containing questionable headers or defined signatures to a PC, where complete rule processing is performed, thus relieving software from inspecting most network traffic. Statistics, such as how many packets match each rule, are used to optimize rule processing systems.
Another method, which detects port-scanning activity on a network element rather than detecting established connections, is described in the paper “Very Fast Containment of Scanning Worms” by Nicholas Weaver, et al. The system described in that paper uses an associative cache to track “external connections” and requires a notion of “internal” and “external” IP addresses, which would result in inefficient operation on edge or core routers.
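The associative-cache idea can be sketched as follows. This is a loose illustration in the spirit of Weaver et al.'s containment scheme, not their algorithm: the table size, threshold, hash-slot scheme, and use of RFC 1918 private ranges as the "internal" test are all assumptions made for the example:

```python
import ipaddress

TABLE_SIZE = 1 << 16  # small fixed-size table; an assumption for the sketch

class ScanCache:
    def __init__(self, threshold=10):
        self.counts = [0] * TABLE_SIZE
        self.threshold = threshold

    @staticmethod
    def _is_internal(ip):
        # Stand-in internal/external test: treat private ranges as internal.
        return ipaddress.ip_address(ip).is_private

    def _slot(self, ip):
        # Associative cache: collisions deliberately conflate hosts.
        return hash(ip) % TABLE_SIZE

    def attempt(self, src_ip, dst_ip):
        # Count only internal->external attempts. Needing this distinction
        # is what makes the scheme awkward on edge or core routers, where
        # no clean internal/external boundary exists.
        if self._is_internal(src_ip) and not self._is_internal(dst_ip):
            slot = self._slot(src_ip)
            self.counts[slot] += 1
            return self.counts[slot] > self.threshold
        return False

    def success(self, src_ip):
        # A completed (acknowledged) connection reduces the pending count.
        slot = self._slot(src_ip)
        if self.counts[slot] > 0:
            self.counts[slot] -= 1
```

A host whose outstanding external attempts exceed the threshold is flagged as a likely scanner; successful connections pull the count back down, so normal clients stay below it.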
V. Paxson describes a system for monitoring network traffic in “Bro: A System for Detecting Network Intruders in Real-Time,” Computer Networks, 31(23-24), pp. 2435-2463, 14 Dec. 1999. Bro uses an “event engine” that reduces a kernel-filtered network traffic stream into a series of high-level events, and a “policy script interpreter” that interprets event handlers written in a specialized language used to express a site's security policy. Event handlers can update state information, synthesize new events, record information to disk, and generate real-time notifications. Again, the Bro system focuses on detecting port scans, not on detecting established connections, and as such is not accurate enough at filtering malicious traffic. While it does track the number of failed attempts, Bro is also limited to lower traffic rates (1 Gbps).
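Bro's two-layer design described above can be sketched in miniature. The event names, handler signature, and dispatch API below are illustrative assumptions, not Bro's actual interface; the point is only the split between an engine that emits high-level events and policy handlers that maintain state and raise notifications:

```python
# Minimal sketch of an event-engine / policy-handler split: the engine
# dispatches high-level events, and registered handlers (standing in for
# Bro's policy-script interpreter) update state and emit notifications.
class EventEngine:
    def __init__(self):
        self.handlers = {}
        self.notifications = []

    def on(self, event, fn):
        self.handlers.setdefault(event, []).append(fn)

    def dispatch(self, event, **kw):
        for fn in self.handlers.get(event, []):
            fn(self, **kw)

engine = EventEngine()
state = {"failed": 0}

def connection_rejected(engine, src=None):
    # Handlers can update state and generate real-time notifications.
    state["failed"] += 1
    if state["failed"] == 3:  # illustrative threshold
        engine.notifications.append(f"possible scan from {src}")

engine.on("connection_rejected", connection_rejected)
for _ in range(3):
    engine.dispatch("connection_rejected", src="203.0.113.5")
```

Note that the handler counts failed attempts, mirroring the failure-tracking behavior attributed to Bro above.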
Existing flow traffic monitoring tools are not able to trace flow establishment and duration with accuracy. Traffic flow monitoring or filtering systems that enable identification of established connections and measurement of flow duration with a high degree of accuracy are very important to network operators and providers, especially in resource-constrained environments. There is thus a need for connection detection systems that operate with high accuracy and provide immediate feedback while running in high-speed routers at line speed.