Data processing systems typically comprise a central processing unit, having one or more processors, for managing the movement of information, or data, between different peripheral units, or devices. These multi-processor systems often segregate system tasks among the processors to improve the performance, and the efficiency, of the overall data processing system. Although the system tasks may be divided among the processors, the data processing system may also include a shared resource, such as a shared memory. The management of this shared resource requires communication between the processors. A typical practice provides a first processor for supplying the resource, such as memory buffers partitioned within the shared memory, and a second processor for consuming the resource, such as allocating the memory buffers to specific tasks within the system.
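The producer/consumer arrangement described above can be sketched in outline. The following is a minimal illustration, not any particular system's implementation: the class name, sizes, and offset-based partitioning scheme are assumptions chosen for clarity. One routine stands in for the first processor partitioning buffers within a shared memory region, and another for the second processor allocating them to tasks.

```python
from collections import deque

class BufferPool:
    """Illustrative pool of fixed-size buffers carved from a shared memory region."""

    def __init__(self, memory_size, buffer_size):
        self.free = deque()          # buffers produced but not yet allocated
        self.memory_size = memory_size
        self.buffer_size = buffer_size
        self.next_offset = 0         # next unpartitioned offset in shared memory

    def produce(self):
        """First processor: partition one more buffer from the shared memory."""
        if self.next_offset + self.buffer_size > self.memory_size:
            return False             # shared memory exhausted
        self.free.append(self.next_offset)
        self.next_offset += self.buffer_size
        return True

    def consume(self):
        """Second processor: allocate a buffer to a task, if one is available."""
        return self.free.popleft() if self.free else None
```

In this sketch, `consume` returning `None` is precisely the condition under which the second processor would need to notify the first that additional resources are required.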
In such multi-processor systems with shared resources, the first processor must notify the second processor when resources become available. Likewise, the second processor must notify the first processor when resources are used, or allocated, and additional resources are needed, or required. If the resources are critical to the performance, or operation, of the data processing system, each processor must notify the other in a minimal time period.
Current multi-processing systems, as disclosed in prior art, attempt to minimize the inter-processor communication delay by using a shared memory. In this technique, the processors use a section of the shared memory to store communication messages and control information. This section of the memory is dedicated as a communication mechanism for the processors, having data elements reserved for communication protocols, communication messages, and the status of particular events within the system. A special data element is often designated as a communication lock, to prevent a second processor from changing any of the data elements within the dedicated section of the shared memory while a first processor is modifying one or more of the data elements. The processors periodically poll the data elements within the dedicated section of the shared memory to determine whether any communications have been sent to them, or whether certain events have occurred in the system warranting their attention.
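The dedicated communication section and its lock element might be organized along the following lines. This is a schematic sketch under stated assumptions: the field names are hypothetical, and a `threading.Lock` stands in for the special data element designated as the communication lock; real systems would use a hardware lock word or atomic test-and-set within the shared memory itself.

```python
import threading

# Hypothetical layout of the dedicated communication section of shared memory.
comm_section = {
    "lock": threading.Lock(),   # stands in for the designated communication-lock element
    "messages": [],             # data elements reserved for communication messages
    "events": set(),            # status of particular events within the system
}

def post_message(section, message, event=None):
    """Acquire the communication lock, then modify the shared data elements."""
    with section["lock"]:
        section["messages"].append(message)
        if event is not None:
            section["events"].add(event)

def poll(section):
    """Periodic poll: drain any messages and event notices left in the section."""
    with section["lock"]:
        messages, section["messages"] = section["messages"], []
        events, section["events"] = section["events"], set()
    return messages, events
```

Note that `poll` must run periodically whether or not anything was posted; an empty result represents exactly the unwarranted polling overhead discussed below.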
The shared memory approach to inter-processor communications contains certain disadvantages. The first processor, or the resource producer, is required to periodically poll the dedicated section of the shared memory even when the system needs no resources produced. Likewise, the second processor, or resource consumer, is required to poll the dedicated section of shared memory even when no resources are available for allocation, or consumption. This unwarranted polling of the communication section of the shared memory by the processors wastes time, introduces inefficiencies in the system, and reduces the overall performance of the multi-processing system.
As an alternative, current multi-processing systems may also use mailbox message techniques to provide communications between the processors. In this technique, each processor contains an associated mailbox, a memory attached to the processor and dedicated to receiving and queuing messages sent to the processor. Each processor includes a portion of its control program, a subprogram or subroutine, for managing its mailbox, oftentimes referred to as a message handler. The message handler typically inspects the mailbox for incoming messages from other processors, and may also send outgoing messages to other processors. The message handler in each processor periodically executes, as scheduled within the processor's control program, to manage the incoming and outgoing messages.
The mailbox typically stores the messages in the sequence in which they were received, similar to a first in, first out (FIFO) queue. The message handler services a message in the mailbox by transferring control to the portion of the processor's control program designed to respond to the message, or to an event associated with the message. Some message handlers service the messages in the order in which they reside in the mailbox; however, the message handler need not address the messages in that particular sequence. More sophisticated message handlers may use different rules to assign a priority to each message, and service the messages in a sequence different from that in which they were received in the mailbox.
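The two servicing disciplines described above can be sketched as follows. This is an illustrative outline only: the `Mailbox` class, the dispatch callback, and the priority function are hypothetical names introduced for the example, and a heap is merely one way a sophisticated handler might impose a priority ordering.

```python
import heapq
from collections import deque

class Mailbox:
    """Illustrative mailbox: messages queue in arrival (FIFO) order."""

    def __init__(self):
        self.queue = deque()

    def deliver(self, message):
        self.queue.append(message)

def fifo_handler(mailbox, dispatch):
    """Service messages strictly in the sequence in which they were received."""
    while mailbox.queue:
        dispatch(mailbox.queue.popleft())

def priority_handler(mailbox, dispatch, priority_of):
    """Assign each message a priority; service lowest priority value first."""
    heap = []
    for seq, msg in enumerate(mailbox.queue):
        heapq.heappush(heap, (priority_of(msg), seq, msg))  # seq keeps ties in FIFO order
    mailbox.queue.clear()
    while heap:
        _, _, msg = heapq.heappop(heap)
        dispatch(msg)
```

With a priority function that favors resource-related messages, `priority_handler` would service a buffer-allocation request ahead of earlier, unrelated messages, whereas `fifo_handler` would not.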
One ordinarily skilled in the art can understand that the current mailbox message technique for inter-processor communications also includes some disadvantages. The message handler introduces significant overheads, measured in control instructions and processing cycles, when it periodically inspects the mailbox, and prioritizes the sequence in which messages are received. These processing overheads correspond to time delays, and performance inefficiencies, in the multi-processor system. In addition, messages unrelated to sharing resources may be processed prior to messages pertaining to the shared resources, thereby creating an additional time delay, and additional performance reduction, in the multi-processor system.
Accordingly, an improved system and method are needed for allowing multiple processors to manage shared resources within a data processing system.