The development of the EDVAC computer system of 1948 is often cited as the beginning of the computer era. Since that time, computer systems have evolved into extremely complicated devices. To be sure, today's computers are more sophisticated than early systems such as the EDVAC. Fundamentally speaking, though, the most basic requirements levied upon computer systems have not changed. Now, as in the past, a computer system's job is to access, manipulate, and store information. This fact is true regardless of the type or vintage of computer system. Accordingly, computer system designers are constantly striving to improve the way in which a computer system deals with information.
Computer systems manipulate information by following a detailed set of instructions, commonly called a “program” or “software.” Software development has traditionally been a time-consuming task. The field of software engineering has attempted to overcome the limitations of traditional techniques by proposing new, more efficient software development models. One such technique is called “object-oriented” programming. Programs created using this technique utilize self-contained items, known as “objects,” which generally contain some information (“data”) and a set of operations (“methods”) capable of manipulating that data. These objects interact with each other by sending sets of instructions, called “messages.”
In many object-oriented programs, some of the objects act as providers of services or functionality, whereas other objects act as consumers of those services or functionality. The providers of information or functionality are commonly known as "servers" or "subjects." The consumers of the information or functionality are called "clients" or "observers."
In conventional subject-observer systems, each subject maintained a list of observers and, when the subject's state changed, notified every observer on that list. This notification occurred regardless of an observer's particular interest in the change or its capacity to handle the update. Each observer would then request the updated information from the subject, again without regard to its interest or capacity, and the subject would issue the update, only for an uninterested observer to discard it. This drawback made conventional designs inflexible and inefficient.
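The conventional arrangement described above can be sketched in a short example. The class and attribute names here are illustrative, not drawn from any particular system; the sketch shows a subject notifying every attached observer on each state change, with uninterested observers fetching and then discarding the update.

```python
class Subject:
    """Conventional subject: keeps a single list of observers and
    notifies all of them on every state change."""

    def __init__(self):
        self._observers = []
        self._state = None

    def attach(self, observer):
        self._observers.append(observer)

    def set_state(self, state):
        self._state = state
        for obs in self._observers:   # every observer is notified...
            obs.update(self)          # ...regardless of interest or capacity

    def get_state(self):
        return self._state


class Observer:
    def __init__(self, interested_in):
        self.interested_in = interested_in
        self.received = []

    def update(self, subject):
        state = subject.get_state()     # pull the update back from the subject
        if state in self.interested_in:
            self.received.append(state)
        # otherwise the fetched update is simply discarded


subject = Subject()
a = Observer(interested_in={"price"})
b = Observer(interested_in={"volume"})
subject.attach(a)
subject.attach(b)
subject.set_state("price")  # both observers are notified; b fetches and discards
```

After the single state change, observer `a` has recorded the update while observer `b` has done the same round-trip work only to throw the result away, which is the inefficiency the text identifies.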
This drawback is further magnified in modern "distributed" systems. These systems are made up of several independent computers, connected by a communication device such as a network or system bus, that work together to execute a program. Each computer in the system is capable of sending messages to the other computers, which allows objects existing on different computers to work together. Although this design allows the distributed system to perform tasks in parallel, its natural advantages are not fully utilized in conventional systems because "remote" messages are comparatively slow. That is, when objects reside on different computer systems, the distributed system manager must send messages between those systems. These inter-system messages are sent at a much slower rate than intra-system messages. This drawback can make it computationally expensive to maintain data consistency across the distributed system.
An additional drawback of conventional subject/observer systems is that the subject object controls the message transmission rate. Frequently, an observer object running on a heavily burdened system may not be able to handle updates from the subject object at this rate. This drawback can cause a bottleneck at one processor, which can cascade to other processors and cause them to become backed up as well.
Yet another drawback of conventional designs is that each subject frequently needs to maintain several different types of relationships simultaneously, and therefore to exchange different data for each type of relationship. In an effort to support these different relationships, conventional methods forced the subject object to support multiple attach/detach interfaces and to maintain multiple observer lists. This approach, however, was not easily extended and frequently caused "code bloat."
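The pattern just described can be sketched as follows. The `Stock` class and its price/volume relationships are hypothetical examples, not drawn from the text; the point is that each relationship type forces another observer list and another attach/detach/notify triple onto the subject, so the class grows with every new relationship.

```python
class Stock:
    """Conventional subject forced to keep one observer list, and one
    attach/detach/notify set of methods, per relationship type."""

    def __init__(self):
        self._price_observers = []
        self._volume_observers = []
        # a third relationship type would require a third list
        # and three more methods -- the "code bloat" at issue

    def attach_price_observer(self, obs):
        self._price_observers.append(obs)

    def detach_price_observer(self, obs):
        self._price_observers.remove(obs)

    def attach_volume_observer(self, obs):
        self._volume_observers.append(obs)

    def detach_volume_observer(self, obs):
        self._volume_observers.remove(obs)

    def notify_price(self, price):
        for obs in self._price_observers:
            obs.price_changed(price)

    def notify_volume(self, volume):
        for obs in self._volume_observers:
            obs.volume_changed(volume)


class PriceLogger:
    """Observer interested only in the price relationship."""

    def __init__(self):
        self.log = []

    def price_changed(self, price):
        self.log.append(price)


stock = Stock()
logger = PriceLogger()
stock.attach_price_observer(logger)
stock.notify_price(101.5)
```

Note that nothing in this design is shared between the two relationship types: every new kind of data the subject must publish duplicates the same boilerplate rather than extending a single generic mechanism.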
Without a system that can optimize the use of system resources by minimizing remote calls and balancing workloads, data processing systems will never fully realize the benefits of distributed computing.