A data storage system can include several independent processors that cooperate to increase throughput of many tasks associated with data storage and retrieval. These processors typically communicate with each other by leaving messages in a shared memory. This shared memory is constantly available to the processors for reading and writing.
Certain tasks require that a processor use a particular system resource to the exclusion of other processors. When a processor is using such a system resource, it is useful to communicate to other processors that the system resource is busy. Conversely, when a processor is done using such a system resource, it is useful to communicate to other processors that the resource is now free.
One approach to providing such communication is to leave messages in the shared memory. However, because the memory is shared, a race condition can occur between processors: if two processors read and write the same location without coordination, one may inadvertently overwrite the other's message. This can result in two processors attempting to use the same resource at the same time.
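The race described above is a check-then-act hazard: both processors read the flag while it still says "free," and each then writes "busy." A minimal sketch of one such interleaving, with illustrative names (the flag, processor ids, and steps are assumptions, not taken from any particular system):

```python
# Sketch of the check-then-act race on a shared "busy" flag.
# All names here are illustrative, not from any particular system.
shared_memory = {"resource_busy": False}

def sees_free():
    # Step 1: a processor reads the flag from shared memory.
    return not shared_memory["resource_busy"]

def claim(pid, owners):
    # Step 2: the processor writes "busy" and starts using the resource.
    shared_memory["resource_busy"] = True
    owners.append(pid)

owners = []
# An interleaving in which both processors read before either writes:
a_sees_free = sees_free()
b_sees_free = sees_free()   # B also sees "free" -- the race window
if a_sees_free:
    claim("A", owners)
if b_sees_free:
    claim("B", owners)
# Both processors now believe they hold the resource exclusively.
```

Because the read and the write are separate steps, no message in shared memory by itself can close this window; the flag update of one processor is silently overwritten in effect by the other's stale read.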
To avoid the possibility of processors overwriting each other's messages, the shared memory can be partitioned, with each processor granted write-access only to its own partition. However, as the number of processors increases, the amount of memory allocated to each processor decreases. A processor can then run short of memory even while considerable memory stands idle, locked away in partitions allocated to other processors.
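The cost of static partitioning can be seen with simple arithmetic. The figures below are purely illustrative (the total memory size and processor counts are assumptions):

```python
# Illustrative arithmetic for static equal partitioning of shared memory.
# The 1024 MiB total and the processor counts are assumed for the example.
TOTAL_MEMORY_MIB = 1024

def partition_size(num_processors):
    # Each processor may write only to its own equal share.
    return TOTAL_MEMORY_MIB // num_processors

# As the processor count grows, each share shrinks, even though memory
# idle in another processor's partition cannot be borrowed.
for n in (4, 16, 64):
    print(f"{n} processors -> {partition_size(n)} MiB each")
```

With 4 processors each partition is 256 MiB; with 64 processors it drops to 16 MiB, and a busy processor can exhaust its 16 MiB while most of the other 1008 MiB sits unused.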
Another approach is to maintain a queue for each such resource. However, managing a queue incurs system overhead. It is therefore preferable to avoid such a solution whenever possible.
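A per-resource queue can be sketched as a FIFO of processor ids in which the head of the queue owns the resource. The class and method names below are hypothetical, chosen only to illustrate the bookkeeping that every acquisition and release would entail:

```python
from collections import deque

# Hypothetical sketch of a per-resource wait queue. The processor at the
# head of the FIFO owns the resource; everyone else waits behind it.
class ResourceQueue:
    def __init__(self):
        self.waiters = deque()

    def request(self, pid):
        # Enqueue a processor; returns True if it acquired immediately.
        self.waiters.append(pid)
        return self.owner() == pid

    def release(self, pid):
        # Only the current owner may release; the next waiter takes over.
        assert self.waiters and self.waiters[0] == pid
        self.waiters.popleft()

    def owner(self):
        return self.waiters[0] if self.waiters else None

q = ResourceQueue()
q.request("A")   # A acquires immediately
q.request("B")   # B waits behind A
q.release("A")   # B now owns the resource
```

Even this minimal version shows the overhead the text refers to: every use of the resource requires an enqueue, an ownership check, and a dequeue, and the queue itself consumes shared memory that must in turn be protected.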