1. Field of the Invention
The present invention relates to computing systems and, more particularly, to data communications for computing systems.
2. Description of the Related Art
Recent developments in data communication for computing systems arrange software modules in a layered model. One feature of UNIX System V uses a layered model of software modules, referred to herein as the STREAMS model. The STREAMS model provides a standard way of dynamically building and passing messages through software modules that are placed in layers in a protocol stack. In the STREAMS programming model, the protocol stack can be dynamically changed, e.g., software modules can be added or removed (pushed and popped) at run-time. Broadly speaking, a “stream” generally refers to an instance of a full-duplex path, using the model and data communication facilities, between a process in user space and a driver which is typically located within the kernel space of the computing system. In other words, a stream can be described as a data path that passes data in both directions between a stream driver in the kernel space and a process in user space.
FIG. 1 illustrates a STREAMS programming environment 100 including a stream head 102, stream modules 104 and 106, and a stream driver 108. An application program can create a “stream” by opening a device 110. Initially, the stream includes only the stream head 102 and the stream driver 108. The stream head 102 is the end of the stream closest to the user process and serves as a user interface between the stream and the user process. Similarly, the stream driver 108 is the end of the stream closest to the device 110 and transfers data between the kernel and the device 110.
After the stream head 102 and stream driver 108 are provided, one or more stream modules, such as stream modules 104 and 106, can be pushed onto the stream between the stream head 102 and the stream driver 108. An application can dynamically add or remove (push or pop) stream modules on the stream stack at run-time. Each of the stream modules 104 and 106 includes a defined set of kernel-level routines and data structures to process data that passes through it. For example, stream modules can operate to convert lower case to upper case, to add network routing information, etc.
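The push and pop operations described above can be sketched as a simple stack of module records. This is a minimal illustration only; the structure and function names (struct strmod, push_module, pop_module) are assumptions for the sketch and are not taken from any actual STREAMS implementation.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stream-module record; names are illustrative only. */
struct strmod {
    const char    *name;  /* e.g. a case-conversion or routing module */
    struct strmod *next;  /* next module toward the stream driver */
};

struct stream {
    struct strmod *top;   /* module nearest the stream head */
};

/* Push a module onto the stream, between the stream head and
 * whatever currently sits below it. */
static void push_module(struct stream *s, const char *name)
{
    struct strmod *m = malloc(sizeof *m);
    m->name = name;
    m->next = s->top;
    s->top = m;
}

/* Pop the module nearest the stream head; returns its name, or
 * NULL when only the head and driver remain. */
static const char *pop_module(struct stream *s)
{
    struct strmod *m = s->top;
    const char *name;

    if (m == NULL)
        return NULL;
    name = m->name;
    s->top = m->next;
    free(m);
    return name;
}
```

A pop always removes the most recently pushed module, which mirrors the last-in first-out behavior of pushing and popping modules at run-time.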
As depicted in FIG. 1, each of the stream head 102, the stream modules 104 and 106, and the stream driver 108 has a pair of queues that can be used as containers for holding messages (e.g., data) traveling upstream or downstream over the stream. For example, a down-queue 112 and an up-queue 114 hold blocks of messages (data) for the stream module 104 and respectively pass the data down the stream to the stream driver 108 or up the stream to the stream head 102. The messages can be ordered within the queues 112 and 114 on a first-in first-out (FIFO) basis, perhaps according to assigned priorities.
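The FIFO-within-priority ordering mentioned above can be sketched as follows. The message-block layout and the "band" priority field are assumptions made for this illustration, not fields of any real STREAMS header.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative message block; "band" is an assumed priority field
 * (higher band = more urgent). */
struct msgb {
    int          band;
    const char  *data;
    struct msgb *next;
};

struct queue {
    struct msgb *head;
};

/* Insert after the last queued message of equal or higher band, so
 * messages stay first-in first-out within a band while higher-band
 * messages drain first. */
static void put_msg(struct queue *q, struct msgb *m)
{
    struct msgb **pp = &q->head;

    while (*pp != NULL && (*pp)->band >= m->band)
        pp = &(*pp)->next;
    m->next = *pp;
    *pp = m;
}

/* Remove and return the message at the front of the queue. */
static struct msgb *get_msg(struct queue *q)
{
    struct msgb *m = q->head;

    if (m != NULL)
        q->head = m->next;
    return m;
}
```

Two messages queued at the same band are returned in arrival order, while a message queued at a higher band is returned ahead of both.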
It should be noted that in some situations messages cannot be placed in the queues 112 and 114. For example, when a stream queue has reached its allotted size, messages can no longer be placed in that queue. As another example, messages cannot be placed in the queues 112 and 114 when other processing threads have acquired their software locks. In such cases, messages are stored in another queue that serves as a back-up queue, herein referred to as a “synchronization queue”. For example, the synchronization queues 116 and 118 depicted in FIG. 1 respectively hold messages that could not be placed into the queues 112 and 114. It should be noted that in the STREAMS model, some stream modules (e.g., the Internet Protocol (IP) module) are confined to having only one synchronization queue. As a result, for some stream modules there is only one synchronization queue available to serve as a backup queue. For example, a single synchronization queue 120 is used to hold the messages for the second stream module 106.
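The fallback behavior described above, where a message is diverted to the synchronization queue when the main queue is full or locked, can be sketched as follows. The fixed capacity, the boolean lock flag, and all names here are assumptions made for illustration.

```c
#include <assert.h>
#include <stdbool.h>

enum { Q_CAP = 4 };      /* illustrative allotted queue size */

/* Toy fixed-size queue; "locked" models a software lock held by
 * another processing thread. */
struct toy_q {
    const char *msgs[Q_CAP];
    int  count;
    bool locked;
};

/* Try to place a message on the main queue; when that queue has
 * reached its allotted size or its lock is held, divert the message
 * to the backup (synchronization) queue instead.
 * Returns true when the message was diverted. */
static bool put_or_divert(struct toy_q *main_q, struct toy_q *sync_q,
                          const char *msg)
{
    if (!main_q->locked && main_q->count < Q_CAP) {
        main_q->msgs[main_q->count++] = msg;
        return false;
    }
    /* Assumes the synchronization queue itself has room. */
    sync_q->msgs[sync_q->count++] = msg;
    return true;
}
```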
One problem with conventional implementations of the STREAMS model is that messages are intermixed in one synchronization queue regardless of their type. As will be appreciated by those skilled in the art, some of the messages held in the synchronization queues 116 and 118 contain data pertaining to operational events (events) that may affect the flow of data and/or how data is to be processed. For example, one such operational event may be related to changing the path of data flow to facilitate re-routing messages through a different physical router. Typically, data pertaining to operational events needs to be processed before other data in the synchronization queue can be processed. However, since data pertaining to operational events is intermixed with data not pertaining to any operational event, the conventional models do not provide an efficient mechanism for identifying and processing data pertaining to events.
Another problem with conventional implementations of the STREAMS model is that there is no mechanism for arranging or prioritizing data held in a synchronization queue. All messages are maintained in one synchronization queue regardless of their relative importance. As a result, messages of lesser importance may be processed by a high-priority processing thread. Thus, the conventional models do not provide an effective mechanism to process data held in synchronization queues in accordance with the relative importance of the data.
In view of the foregoing, there is a need for improved methods for managing data propagation between software modules.
3. Summary of the Invention
Broadly speaking, the invention relates to techniques for managing the propagation of data through software modules used by computer systems. More particularly, the invention improves the propagation of data, namely, messages to and from the synchronization queues that back up the main queues associated with the software modules. In one aspect, the invention provides a segregated synchronization queue which allows segregation of data pertaining to events from data that does not pertain to events. In accordance with another aspect, data can be organized within a synchronization queue and processed in accordance with priorities. The invention is particularly well suited for use with the STREAMS model, which uses software modules arranged in a stack to provide data communications.
The invention can be implemented in numerous ways, including a system, an apparatus, a method or a computer readable medium. Several embodiments of the invention are discussed below.
As a synchronization queue for a computer system, one embodiment of the invention includes: a first synchronization queue container suitable for storing one or more first message blocks while the one or more first message blocks are awaiting processing, the one or more first message blocks of the first synchronization queue container being arranged in accordance with a first desired order; a second synchronization queue container suitable for storing one or more second message blocks while the one or more second message blocks are awaiting processing, the one or more second message blocks of the second synchronization queue container being arranged in accordance with a second desired order; and a synchronization queue header providing reference to at least the first and second synchronization queue containers.
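The embodiment above, a header referencing two containers that each keep their own order, can be sketched with the following data structures. The field and type names are illustrative assumptions, not part of the claimed apparatus, and simple FIFO order stands in for the "desired order" of each container.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal message block for illustration. */
struct mblk {
    const char  *data;
    struct mblk *next;
};

/* One synchronization queue container: message blocks awaiting
 * processing, kept here in simple FIFO order. */
struct sq_container {
    struct mblk *first;
    struct mblk *last;
};

/* The synchronization queue header references at least two
 * containers, one per kind of message. */
struct sq_header {
    struct sq_container events;  /* first container: event messages */
    struct sq_container data;    /* second container: ordinary data */
};

/* Append a message block to a container, preserving its order. */
static void sq_append(struct sq_container *c, struct mblk *m)
{
    m->next = NULL;
    if (c->last != NULL)
        c->last->next = m;
    else
        c->first = m;
    c->last = m;
}
```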
As a method for managing the flow of messages between a first layer software module and a second layer software module, with the first and second layer software modules being arranged in a layered stack, one embodiment of the invention includes the acts of: determining whether a message pertains to an operational event; placing the message in an event queue associated with the first layer software module when it is determined that the message pertains to an operational event; and placing the message in an appropriate data queue associated with the first layer software module when it is determined that the message does not pertain to an operational event.
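The acts of the method above can be sketched as a single dispatch routine. The "is_event" flag stands in for whatever test an implementation would use to recognize an operational event, and all names here are assumptions made for the sketch.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal message block; "is_event" marks a message pertaining to
 * an operational event. */
struct mblk {
    bool         is_event;
    const char  *data;
    struct mblk *next;
};

struct mqueue {
    struct mblk *first;
    struct mblk *last;
};

static void mq_append(struct mqueue *q, struct mblk *m)
{
    m->next = NULL;
    if (q->last != NULL)
        q->last->next = m;
    else
        q->first = m;
    q->last = m;
}

/* Classify the message, then place it on the event queue or the
 * appropriate data queue of the first-layer module.
 * Returns true when the message went to the event queue. */
static bool place_message(struct mqueue *event_q, struct mqueue *data_q,
                          struct mblk *m)
{
    if (m->is_event) {
        mq_append(event_q, m);
        return true;
    }
    mq_append(data_q, m);
    return false;
}
```

Because event messages land in their own queue, a processing thread can drain the event queue before touching ordinary data, which is the segregation the summary describes.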
As a computer readable medium including computer program code for managing the flow of messages between a first software module and a second software module, one embodiment of the invention includes: computer program code for determining whether a message pertains to an operational event; computer program code for placing the message in an event queue associated with the first software module when it is determined that the message pertains to an operational event; and computer program code for placing the message in an appropriate data queue associated with the first software module when said computer program code for determining has determined that the message does not pertain to an operational event.
The advantages of the invention are numerous. Different embodiments or implementations may have one or more of the following advantages. One advantage of the invention is that, within synchronization queues, data pertaining to events can be segregated from data that does not pertain to events. Another advantage of the invention is that data within synchronization queues can be organized and/or processed in accordance with desired priorities. Yet another advantage is that more efficient organization and propagation of data can be achieved.