Real-time, multimedia applications are becoming increasingly important. These applications require extremely fast processing speeds, on the order of many thousands of megabits of data per second. While a single processing unit can be fast, it generally cannot match the processing speed of a multi-processor architecture. Indeed, in multi-processor systems, a plurality of processors can operate in parallel (or at least in concert) to achieve desired processing results.
The types of computers and computing devices that may employ multi-processing techniques are extensive. In addition to personal computers (PCs) and servers, these computing devices include cellular telephones, mobile computers, personal digital assistants (PDAs), set top boxes, digital televisions and many others.
A design concern in a multi-processor system is how to manage the use of a shared memory among a plurality of processing units. Indeed, synchronizing the processing units' access to the shared memory, so that coherent processing results are obtained, may require multi-step operations that must execute without interruption. For example, proper synchronization may be achieved utilizing so-called atomic read sequences, atomic modify sequences, and/or atomic write sequences.
A further concern in such multi-processor systems is managing the heat created by the plurality of processors, particularly when they are utilized in a small package, such as a hand-held device or the like. While mechanical heat management techniques may be employed, they are not entirely satisfactory because they add recurring material and labor costs to the final product. Mechanical heat management techniques also might not provide sufficient cooling.
Another concern in multi-processor systems is the efficient use of available battery power, particularly when multiple processors are used in portable devices, such as laptop computers, hand-held devices and the like. Indeed, the more processors that are employed in a given system, the more power will be drawn from the power source. Generally, the amount of power drawn by a given processor is a function of the number of instructions being executed by the processor and the clock frequency at which the processor operates.
In conventional multiprocessor systems, threads run on different processors. Generally, a thread can be described as the combination of a program counter, a register file, and a stack frame. Taken together, these elements are typically referred to as a thread's “context”. Threads are useful, for instance, in game programming, and can be used for many different tasks in concurrent processing.
However, there is a problem with the conventional use of threads in a multiprocessor system: how to notify a first thread, running on one processor, that an outside event has occurred or that another thread has information of interest to the first thread.
This problem has several aspects. First, in conventional technologies, software is used to notify the thread that an event of interest has occurred. This can involve both the software application and the operating system, which increases notification latency on a given processor. Second, low-latency solutions, when available, typically involve active polling for the event by certain threads, increasing system load and power consumption. Finally, if two or more threads must be notified of an event substantially concurrently, so that the threads can process the event in coordination with one another, then significant overhead can be involved in event delivery and thread scheduling. This added complexity can negatively impact the response time of the threads in a real-time system.
Therefore, there is a need for a system and/or method for notifying threads of an event of interest on disparate processors of a multiprocessor system, one that addresses at least some of the concerns associated with conventional notification of threads on separate processors.