Multi-threaded programming provides the ability to divide program tasks into multiple threads which can then execute concurrently on a single processor through time slicing or execute in parallel on separate processors in a multi-processor computer system. In this way, program tasks can be executed quickly to accommodate large volumes of processing requests.
A problem can arise in multi-threaded computer environments when two or more threads of execution attempt concurrent operations on the same critical code section. In other words, the threads attempt simultaneous access to a shared data record which must be updated atomically so the data is left in a consistent state. Such an inopportune interleaving is commonly known as a race condition and can cause a program to fail or produce unexpected results. That is, the outcome of a task may show an unexpected critical dependence on the relative timing of when one thread accesses a critical code section relative to another thread which attempts to access the same critical code section. Synchronization methods are procedures which prevent or recover from such conditions. In general, synchronization can be accomplished using either a locking mechanism or a lock-free mechanism when providing thread access to critical code sections.
Locking synchronization methods adhere to a locking model which permits only one thread at a time to acquire a lock and subsequently conduct processing operations within a critical code section. In the Java™ programming language, locking synchronization is accomplished with monitors, which provide mutual exclusion for critical code sections. When a thread holds a lock to a critical code section, other threads attempting to access the critical code section will be paused and placed in a waiting queue. After a thread holding a lock on an object completes processing operations involving the critical code section, the thread releases the lock. Another thread at the head of the waiting queue is then activated from the paused state, given the lock, and allowed to proceed with processing operations.
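As a minimal illustrative sketch (the class and method names are assumptions, not part of the text above), the following Java code shows monitor-based locking synchronization: the `synchronized` keyword causes each thread to acquire the object's monitor before entering the critical code section, so concurrent increments are never lost.

```java
// Sketch of locking synchronization: a counter whose read-modify-write
// update is a critical code section guarded by the object's monitor lock.
public class LockingCounter {
    private long value = 0;

    // Only one thread at a time may hold this object's monitor; other
    // callers are paused and queued until the lock is released.
    public synchronized void increment() {
        value++; // atomic under the lock
    }

    public synchronized long get() {
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        LockingCounter counter = new LockingCounter();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) counter.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get()); // 200000: no updates are lost
    }
}
```

Without the `synchronized` modifiers, the two threads could interleave their read-modify-write steps and the final count could fall short of 200000.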
Lock-free synchronization, as the name suggests, does not adhere to a locking model and instead relies on an atomic update operation, such as compare-and-swap, that guarantees only one thread at a time can successfully update the data protected by a critical code section. If a second thread attempts an update while a first thread is performing one, the first thread succeeds in its update operation while the second thread's attempt fails. The second thread then restarts its update attempt after the first thread completes its update to the critical code section.
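This retry behavior can be sketched in Java using the standard `java.util.concurrent.atomic` package (the class and retry-loop structure here are an illustrative assumption): each thread reads the current value and attempts a compare-and-set; if another thread updated the value in the meantime, the compare-and-set fails and the thread simply retries.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of lock-free synchronization: increment via an atomic
// compare-and-set, retrying whenever a concurrent update intervenes.
public class LockFreeCounter {
    private final AtomicLong value = new AtomicLong(0);

    public void increment() {
        long current;
        do {
            current = value.get(); // read the current value
            // compareAndSet succeeds only if no other thread has
            // changed the value since it was read; otherwise retry.
        } while (!value.compareAndSet(current, current + 1));
    }

    public long get() {
        return value.get();
    }

    public static void main(String[] args) throws InterruptedException {
        LockFreeCounter counter = new LockFreeCounter();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) counter.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get()); // 200000: failed attempts were retried
    }
}
```

No thread is ever paused waiting for a lock; a thread that loses the race simply loops and tries again with the fresh value.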
Because locking synchronization pauses threads and places them in a waiting queue until a lock is released, processing throughput can suffer. Furthermore, if a thread holding a lock fails to complete processing, the program will fail to make progress and will become unresponsive. On the other hand, while lock-free synchronization methods often produce greater execution performance, lock-free synchronization methods are not always possible to implement for all critical code sections. Ideally, program code should allow both methods to interoperate, that is, opportunistically attempt to use lock-free synchronization whenever possible but revert to locking synchronization as needed. Therefore, there is a need for a method which allows locking and lock-free synchronization methods to interoperate within a program executing in a multi-threaded computing environment.
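One hypothetical shape such interoperation could take is sketched below; the class name, retry budget, and fallback policy are illustrative assumptions rather than a method prescribed by the text. A thread first makes a bounded number of lock-free compare-and-set attempts and, only if contention causes all of them to fail, falls back to updating under the monitor lock. Because the fallback path still performs an atomic update, it remains consistent with threads that continue to succeed on the lock-free path.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of locking and lock-free interoperation:
// opportunistic compare-and-set attempts with a locking fallback.
public class HybridCounter {
    private static final int MAX_CAS_ATTEMPTS = 3; // assumed retry budget
    private final AtomicLong value = new AtomicLong(0);

    public void increment() {
        // Opportunistic lock-free path: try a few CAS updates first.
        for (int i = 0; i < MAX_CAS_ATTEMPTS; i++) {
            long current = value.get();
            if (value.compareAndSet(current, current + 1)) {
                return; // lock-free update succeeded
            }
        }
        // Fallback path: serialize this contended update behind the
        // monitor lock. The update itself is still atomic, so it stays
        // consistent with threads succeeding on the lock-free path.
        synchronized (this) {
            value.incrementAndGet();
        }
    }

    public long get() {
        return value.get();
    }

    public static void main(String[] args) throws InterruptedException {
        HybridCounter counter = new HybridCounter();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) counter.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get()); // 200000 either way
    }
}
```

Under low contention nearly every update completes on the fast lock-free path; under heavy contention the lock throttles retries instead of letting threads spin indefinitely.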