Computers are commonly arranged into networks to allow intercommunication. Depending upon the interconnection technology, these networks can span great distances. Moreover, the networks can employ one or more communication protocols, devices may be added to or removed from the network arbitrarily, communication paths may change, and the nature and characteristics of the network traffic are variable. As such, the management of these systems has become increasingly complex.
Among other things, network management typically entails determining which sections of a computer network are over- or under-utilized. In addition, it includes detecting and locating network faults so that repairs and/or re-routing of the network can be made, if necessary.
In order to perform these network management functions, network management tools have been developed to assist the MIS expert in managing the network. Network management tools employ software applications that allow the MIS expert to diagnose the network and thereby minimize maintenance cost by efficiently utilizing the MIS expert's time. Sometimes, these tools include dedicated hardware.
Some older network management tools performed these tasks by periodically polling certain devices connected to the network. A device connected to the network is commonly called a network node, or simply a node. This approach is sometimes referred to as the "ping" approach. Though this approach could determine the connectivity and operability of the nodes, it could not effectively provide the full range of functionality that network professionals desired. For example, it was impractical for these systems to gather utilization statistics, and connectivity analysis was only a small portion of the task of isolating network problems.
Later, protocol analyzers were developed, which provided the capability of collecting basic statistics and of filtering for and decoding specific packets. It will be appreciated that the structure of a packet will depend upon the underlying protocol. However, protocol analyzers require a technician to take equipment to the problem, once it is discovered.
Still later, network management tools used dedicated nodes to monitor all of the network traffic passing the node. These dedicated nodes used embedded systems, i.e., microprocessors and memory storing task-specific software, to monitor network traffic passing the node. By judiciously placing these nodes into different network segments, the tools could gather information about all of the network traffic. These systems provided more functionality, but had stringent performance requirements, as they required the node to handle all network traffic passing the node.
Standards, such as the Simple Network Management Protocol ("SNMP"), were eventually developed to facilitate development among different network management software suppliers. SNMP outlines what statistical data should be gathered and made available to the network. Software developers can then write their applications, knowing that any system which complies with the standard will provide this data. These types of standards, among other things, facilitate the development of distributed network management.
Within SNMP, the RMON MIB was developed as a standard management information base ("MIB"). It will be appreciated that a MIB is a collection of data and that a MIB is in no way limited to the RMON specification. For an Ethernet segment, RMON requires nine groups of data: the Statistics Group; History Group; Host Group; Host TopN Group; Matrix Group; Alarm Group; Filter Group; Packet Capture Group; and Event Group. This set of data is then available to the network management platform or other application. For token rings, RMON requires ten groups: the nine outlined above plus a token ring group.
SNMP effectively defines the network as a world full of objects, each having a type and a value. SNMP does not, however, specify how this data ought to be collected and compiled. SNMP will not be further discussed herein, as it is known in the art.
Basically, this newer architecture has a plurality of dedicated nodes placed throughout the network. Each of these dedicated nodes is constantly gathering information about its corresponding network segment. This process of gathering information is called "per-packet updating." In such a process, the dedicated node receives each packet that passes the node and analyzes the packet in order to update corresponding data structures, typically tables. For example, if SNMP is used on an Ethernet segment with an RMON-like MIB, the per-packet update would need to update the data structures corresponding to the nine groups.
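The per-packet updating process described above can be sketched as follows. This is a minimal illustration only: the table names, packet fields, and counters are hypothetical stand-ins for the RMON-like groups, not an actual RMON implementation.

```python
# Illustrative per-packet update: each packet seen on the segment
# updates several tables at once. Table and field names are
# hypothetical, loosely modeled on RMON-like groups.

def per_packet_update(tables, packet):
    """Update statistics, host, and matrix tables for one packet."""
    src, dst, length = packet["src"], packet["dst"], packet["len"]

    # Statistics group: segment-wide counters.
    stats = tables["statistics"]
    stats["packets"] = stats.get("packets", 0) + 1
    stats["octets"] = stats.get("octets", 0) + length

    # Host group: per-address counters.
    hosts = tables["hosts"]
    for addr in (src, dst):
        hosts.setdefault(addr, {"packets": 0, "octets": 0})
    hosts[src]["packets"] += 1
    hosts[src]["octets"] += length

    # Matrix group: per-(source, destination) conversation counters.
    entry = tables["matrix"].setdefault((src, dst),
                                        {"packets": 0, "octets": 0})
    entry["packets"] += 1
    entry["octets"] += length


tables = {"statistics": {}, "hosts": {}, "matrix": {}}
per_packet_update(tables, {"src": "A", "dst": "B", "len": 64})
per_packet_update(tables, {"src": "A", "dst": "B", "len": 128})
```

Note that every table is touched on every packet; this is what makes the per-packet update the performance-critical path of the dedicated node.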
At some point, the network management tool will likely request this data from the dedicated nodes. For example, it may request this data to display it in graphical form, or it may request this so that it can perform some further data analysis. In any event, SNMP requires that the data be available for external query. The network management tool communicates with the dedicated node, for example by SNMP, by requesting the particular data.
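The external query step can be sketched as a simple lookup against the collected tables. The dotted names below are illustrative only; they mimic the shape of an SNMP object identifier but are not real OIDs, and the function is not an actual SNMP agent.

```python
# Illustrative external query: a management tool asks the dedicated
# node for a named statistic. The dotted names mimic, but are not,
# real SNMP object identifiers.

def snmp_like_get(tables, name):
    """Resolve a dotted name like 'statistics.packets' against
    the node's collected tables."""
    group, _, field = name.partition(".")
    return tables[group].get(field)


tables = {"statistics": {"packets": 42, "octets": 2688}}
snmp_like_get(tables, "statistics.packets")  # -> 42
```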
This newer architecture of using dedicated nodes and communication protocols implicitly demands that the dedicated nodes have a very high network performance. Typically the dedicated node needs a network performance an order of magnitude higher than a "normal" node. This is so because the dedicated nodes must be able to process aggregate network traffic, whereas a normal node typically needs to handle only the bandwidths associated with typical network communications. If a normal node gets congested, it can typically rely upon standard retry algorithms, known in the art, to ensure that it will eventually get the packets intended for it. On the other hand, if the dedicated node gets congested, any missed packet would go unprocessed, because when the dedicated node is operating passively and promiscuously it should not request a retry. It will be appreciated that the underlying network protocol defines the process for rejecting and retransmitting packets.
In order to meet these stringent performance requirements, the prior art used dedicated nodes having embedded systems, which relied upon in-line programming for the microprocessors. It was thought that in-line programming was necessary to meet the stringent performance requirements. Essentially, in-line programming is a single stream of code (i.e., program instructions) with no jumps, procedure calls, function calls, or other breaks in the flow of control of the code. As such, in-line code quickly becomes cumbersome, complex, and difficult for a software engineer to understand.
In the context of network management tools, multiple tables of information in a memory unit usually need to be managed. Often this management involves a certain set of common operations associated with every table and a certain set of unique operations associated with each table. With in-line programming, however, even the common operations must be reiterated at each instance of the management of each table. If a programming bug is present within the common operations, this programming bug will be present at each instance, i.e., for each table. In contrast, if a procedure or function is used for the common operations, the bug will be present in only one place.
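The contrast described above can be sketched as follows. The table names, entry layout, and eviction policy are hypothetical; the point is only that the common lookup/create/evict operations live in exactly one routine, so a bug or a revision affects one place rather than every table's in-line copy.

```python
# Sketch of factoring the common table operations into one shared
# routine, in contrast to in-line code that repeats them per table.
# Table names and the eviction policy are illustrative.

def touch_entry(table, key, max_entries=4):
    """Common operations shared by every table: look up an entry,
    create it if absent, and evict the oldest entry when full."""
    if key not in table and len(table) >= max_entries:
        oldest = next(iter(table))  # dicts preserve insertion order
        del table[oldest]
    return table.setdefault(key, {"packets": 0})


# Each table still has its own unique update, but the common
# lookup/create/evict logic exists in exactly one place.
host_table, matrix_table = {}, {}
touch_entry(host_table, "A")["packets"] += 1
touch_entry(matrix_table, ("A", "B"))["packets"] += 1
```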
Moreover, it is highly likely that the code will need to undergo subsequent revisions. In-line programming requires that the programmer decipher the in-line code to discern the exact code fragments that need updating. As described above, this update will need to be made at each instance, in contrast to a single instance if procedures and functions are used. This is especially complex in prior art systems, as in-line programming is non-modular, and the programmer will have to be wary of protocol-sensitive code.
Dynamic module loading, which has been used in other contexts, such as Microsoft Windows' Dynamic-Link Libraries, may be very useful for a dedicated node. For example, such dynamic loading can be helpful for implementing sophisticated version and release control, in which different network segments can have different combinations of the software. For example, one segment of the network may need a particular application, such as a proxy manager or an accounting package, that the other network segments do not need. Dynamic loading allows the different combinations of software residing on the various segments to be tailored to the system's needs. Dynamic loading also benefits the software distributor, as it will not need to re-link the various software combinations in order to release them. (Linking and re-linking software are known in the art.) In addition, such loading is also helpful for implementing transient diagnostics. Dynamic loading is extremely difficult to implement with an in-line programming technique because the new code will need to be concatenated with the existing code, and any new code will need to employ some form of memory address independence (e.g., self-relative addressing) to ensure operability. Though such a process is theoretically possible, the high level of difficulty of doing such a task will be appreciated by those skilled in the art.
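The transient-diagnostic use of dynamic loading can be sketched as follows, using Python's importlib as a stand-in for a platform loader such as Windows' dynamic-link library mechanism. The module name and diagnostic are hypothetical; the point is that new functionality is loaded at runtime without re-linking the resident code.

```python
# Sketch of dynamic module loading for a dedicated node. Python's
# importlib stands in for a platform loader (e.g., a DLL loader);
# the module contents and names are illustrative.

import importlib.util
import os
import tempfile

# A transient diagnostic, delivered as source to be loaded on demand
# only on the segments that need it.
MODULE_SOURCE = """
def run_diagnostic(tables):
    # Sum packet counts across all of the node's tables.
    return sum(t.get("packets", 0) for t in tables.values())
"""

def load_module(path, name):
    """Load a module from a file at runtime, without re-linking."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Drop the diagnostic onto "disk", then load and run it on demand.
with tempfile.NamedTemporaryFile("w", suffix=".py",
                                 delete=False) as f:
    f.write(MODULE_SOURCE)
    path = f.name
try:
    diag = load_module(path, "segment_diagnostic")
    total = diag.run_diagnostic({"statistics": {"packets": 7}})
finally:
    os.remove(path)
```

Because the diagnostic arrives as a separately loadable unit, it can be removed after use, and different segments can carry different combinations of such modules.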
All in all, the prior art systems with their use of in-line programming greatly add to development costs and times by requiring more sophisticated programmers to understand the complex, non-modular code.
In connection with this, one can expect that new network management standards will be developed in the future. It is likely that these standards will require more sophisticated statistics and the like. The prior art approach requires that a new in-line program be developed for each new standard. This greatly adds to the development cost and greatly increases the implementation time.