Many conventional devices include a built-in computer whose sole purpose is to operate a single application program from a single dedicated store of information, for example a so-called palm-top address register. A programmer enjoys considerable freedom to develop the operating system and application program for such a device because the uncertainties in the operating environment are few, and allowances for each can be made and tested. The performance of the software as a whole can be optimized.
The continuing trend toward faster computers has allowed system designers to accomplish modest gains in the complexity of application programs while making enormous strides in performing background overhead tasks of which the user may be completely unaware. Such background tasks are no longer mere time-shared application programs but are members of so-called services. A service is a set of member programs written for cooperation on a network of computers. Each computer is a node of the network. Each member generally operates on a physically separate node and communicates with other members of the set to accomplish a distributed processing objective. Examples of such objectives include sharing information, sharing access to a peripheral input or output device, and sharing other services.
The basic unit of information storage is the computer file and, for executable information, the program object. To share information, a conventional service at a minimum employs a naming convention for files, accesses a file by its name, and makes a copy of the contents of the file onto a destination node for efficient use (reading/writing) by the member at a requesting node. While a computer program is a file or group of files, a computer program in operation is better understood as a process that sends and receives messages. Sharing a service is conventionally accomplished either by copying the requisite program files to the requesting node, or by employing a naming convention (e.g., an application program interface) and communicating messages (i.e., passing parameters and results) between the members that make up a distributed service. Such a program (a member of a set) is no longer thought of as a process but as a collection of objects, each object being a communicating process.
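The message-passing pattern described above can be sketched as follows. This is a minimal in-process illustration, not any particular service's implementation: the member names, file name, and message fields are all illustrative assumptions, and the in-process queues stand in for network channels between physically separate nodes.

```python
import queue

class Member:
    """A service member: a communicating process identified by name."""
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()  # stands in for a network channel

    def send(self, other, message):
        # Messages follow an agreed naming convention (the "API"):
        # each names its sender, an operation, and its parameters.
        other.inbox.put({"from": self.name, **message})

    def receive(self):
        return self.inbox.get()

# A file-sharing objective: one member requests a named file and the
# other replies with a copy of its contents for local use.
storage = Member("storage-node")
requester = Member("requesting-node")

files = {"/shared/report.txt": "quarterly figures"}  # storage node's store

requester.send(storage, {"op": "read", "file": "/shared/report.txt"})
request = storage.receive()
storage.send(requester, {"op": "reply", "contents": files[request["file"]]})

reply = requester.receive()
local_copy = reply["contents"]  # a copy now held at the requesting node
```

The key point the sketch shows is that the requester never touches the storage node's file system directly; it knows only the file's name and the message convention.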
Traditional computer systems have managed information storage as a list of named files on a named physical device. Conventional networks of computers provide the service user with an independence from having to know the name of a physical device to obtain access to a named file. Still other conventional systems allow the service user an independence from knowing even the name of the file, as for example, when using a browser service to study a topic on the World Wide Web. In a similar development, conventional object resource brokering permits objects to cooperate without foreknowledge of names and physical locations. The market continues to demand this flexibility that the user of such distributed data storage systems and distributed processing systems enjoys.
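The location independence described above rests on a resolution step: a logical name is looked up in a registry that maps it to whatever physical device currently holds the resource. The sketch below is a hypothetical minimum; the node names, paths, and registry functions are illustrative assumptions, not any particular broker's interface.

```python
# Registry mapping a logical name to its current physical location.
registry = {}  # logical name -> (node, path)

def register(logical_name, node, path):
    """Record (or update) where a logically named resource resides."""
    registry[logical_name] = (node, path)

def resolve(logical_name):
    """Return the current physical location of a logically named resource."""
    return registry[logical_name]

# The user keeps using one logical name even as the file moves
# between physical devices.
register("sales-ledger", node="server-a", path="/disk0/ledger.dat")
location_before = resolve("sales-ledger")

register("sales-ledger", node="server-b", path="/disk2/ledger.dat")  # migrated
location_after = resolve("sales-ledger")
```

The design choice being illustrated is indirection: because every access goes through `resolve`, the physical device name can change without any change visible to the service user.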
Unfortunately, computer networks on which file copies and messages are communicated are subject to frequent change due to expansion (e.g., equipment upgrades), reconfiguration, and temporary interference (e.g., line outages). Further, methods and structures for sharing information and sharing services are subject to change as well. These sources of unreliability of the network, and potential incompatibility in methods and structures, greatly multiply the difficulty of developing reliable software for hosting old services on new computers and network devices, and for hosting new services. Due to the rapid development of ever more capable communication hardware and systems software, few application program developers care to add network tests, network performance monitoring, or network fault analysis to the applications they write, but rather build the application or the new service on the assumption that the network is being managed by some other software. This assumption appears almost ludicrous in light of the unpredictable intensity and combination of tasks a computer network is expected to perform as more users rely on sharing for their business objectives.
Conventional network management application programs cannot keep pace with the development of hardware and systems software. Methods used for conventional network management occupy a significant portion of the network's messaging capability by transferring information to other computers of the network for analysis. Users are, therefore, increasingly exposed to risks of loss of data, waste of personally invested time and materials, sudden disruption of distributed processing, and occasional inability to provide to others what was expected based on past experience with the same computer system.
There remains a need for network monitoring services better matched to the computer system being monitored, and for computer systems having improved monitoring services that reduce the risk of interference which occasionally prevents reliable access to the data, and reliable operation of the services, that the computer system was intended to provide.