1. Field of the Invention
The present invention relates to computer systems and methods in which data resources are shared among concurrent data consumers while preserving data integrity and consistency relative to each consumer. More particularly, the invention concerns an implementation of a mutual exclusion mechanism known as “read-copy update.” Still more particularly, the invention is directed to a technique for increasing the speed of read-copy update grace period detection.
2. Description of the Prior Art
By way of background, read-copy update is a mutual exclusion technique that permits shared data to be accessed for reading without the use of locks, writes to shared memory, memory barriers, atomic instructions, or other computationally expensive synchronization mechanisms, while still permitting the data to be updated (modify, delete, insert, etc.) concurrently. The technique is well suited to multiprocessor computing environments in which the number of read operations (readers) accessing a shared data set is large in comparison to the number of update operations (updaters), and wherein the overhead cost of employing other mutual exclusion techniques (such as locks) for each read operation would be high. By way of example, a network routing table that is updated at most once every few minutes but searched many thousands of times per second is a case where read-side lock acquisition would be quite burdensome.
The read-copy update technique implements data updates in two phases. In the first (initial update) phase, the actual data update is carried out in a manner that temporarily preserves two views of the data being updated. One view is the old (pre-update) data state that is maintained for the benefit of read operations that may have been referencing the data concurrently with the update. The other view is the new (post-update) data state that is available for the benefit of other read operations that access the data following the update. These other read operations will never see the stale data and so the updater does not need to be concerned with them. However, the updater does need to avoid prematurely removing the stale data being referenced by the first group of read operations. Thus, in the second (deferred update) phase, the old data state is only removed following a “grace period” that is long enough to ensure that the first group of read operations will no longer maintain references to the pre-update data.
FIGS. 1A-1D illustrate the use of read-copy update to modify a data element B in a group of data elements A, B and C. The data elements A, B, and C are arranged in a singly-linked list that is traversed in acyclic fashion, with each element containing a pointer to a next element in the list (or a NULL pointer for the last element) in addition to storing some item of data. A global pointer (not shown) is assumed to point to data element A, the first member of the list. Persons skilled in the art will appreciate that the data elements A, B and C can be implemented using any of a variety of conventional programming constructs, including but not limited to, data structures defined by C-language “struct” variables.
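By way of illustration, such a list element might be rendered in C as follows. This is a minimal sketch only; the identifiers elem, value and elem_push are hypothetical and do not appear in the figures.

```c
#include <stdlib.h>

/* One element of the singly-linked list of FIGS. 1A-1D.  Each
 * element stores an item of data and a pointer to the next
 * element in the list (NULL for the last element). */
struct elem {
    int value;
    struct elem *next;
};

/* Allocate a new element and link it at the front of the list
 * whose head pointer is *headp. */
static struct elem *elem_push(struct elem **headp, int value)
{
    struct elem *e = malloc(sizeof(*e));
    e->value = value;
    e->next = *headp;
    *headp = e;
    return e;
}
```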
It is assumed that the data element list of FIGS. 1A-1D is traversed (without locking) by multiple concurrent readers and occasionally updated by updaters that delete, insert or modify data elements in the list. In FIG. 1A, the data element B is being referenced by a reader r1, as shown by the vertical arrow below the data element. In FIG. 1B, an updater u1 wishes to update the linked list by modifying data element B. Instead of simply updating this data element without regard to the fact that r1 is referencing it (which might crash r1), u1 preserves B while generating an updated version thereof (shown in FIG. 1C as data element B′) and inserting it into the linked list. This is done by u1 acquiring an appropriate lock, allocating new memory for B′, copying the contents of B to B′, modifying B′ as needed, updating the pointer from A to B so that it points to B′, and releasing the lock. As an alternative to locking, other techniques such as non-blocking synchronization, transactional memory, or a designated update thread could be used to serialize data updates. All subsequent (post update) readers that traverse the linked list, such as the reader r2, will see the effect of the update operation by encountering B′. On the other hand, the old reader r1 will be unaffected because the original version of B and its pointer to C are retained. Although r1 will now be reading stale data, there are many cases where this can be tolerated, such as when data elements track the state of components external to the computer system (e.g., network connectivity) and must tolerate old data because of communication delays.
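The first-phase update described above may be sketched in C as follows. The sketch is single-threaded and illustrative only: the function name rcu_replace is hypothetical, the lock acquisition and release are elided, and a production implementation would publish the new pointer with an appropriate primitive (e.g., rcu_assign_pointer( ) in the Linux kernel) rather than a plain store.

```c
#include <stdlib.h>
#include <string.h>

struct elem {
    int value;
    struct elem *next;
};

/* First-phase update of FIGS. 1B-1C: publish an updated copy of
 * the element that *link points to.  Returns the stale original,
 * which may be freed only after a grace period expires. */
static struct elem *rcu_replace(struct elem **link, int new_value)
{
    struct elem *old  = *link;
    struct elem *copy = malloc(sizeof(*copy));

    memcpy(copy, old, sizeof(*copy)); /* copy B to B' (retains B's pointer to C) */
    copy->value = new_value;          /* modify B' as needed */
    *link = copy;                     /* A now points to B'; plain store
                                       * stands in for rcu_assign_pointer() */
    return old;                       /* stale B, still referenced by r1 */
}
```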
At some subsequent time following the update, r1 will have continued its traversal of the linked list and moved its reference off of B. In addition, there will be a time at which no other reader process is entitled to access B. It is at this point, representing expiration of the grace period referred to above, that u1 can free B, as shown in FIG. 1D.
FIGS. 2A-2C illustrate the use of read-copy update to delete a data element B in a singly-linked list of data elements A, B and C. As shown in FIG. 2A, a reader r1 is assumed to be currently referencing B and an updater u1 wishes to delete B. As shown in FIG. 2B, the updater u1 updates the pointer from A to B so that A now points to C. In this way, r1 is not disturbed but a subsequent reader r2 sees the effect of the deletion. As shown in FIG. 2C, r1 will subsequently move its reference off of B, allowing B to be freed following expiration of the grace period.
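The deletion of FIGS. 2A-2C can be sketched similarly. Again the function name rcu_unlink is hypothetical and the updater's serialization is elided; the stale element returned may be freed only after the grace period expires.

```c
struct elem {
    int value;
    struct elem *next;
};

/* First-phase deletion of FIG. 2B: unlink the element that *link
 * points to.  The plain store stands in for rcu_assign_pointer();
 * the reader r1 continues to see B, and B's pointer to C, until
 * it moves its reference off of B. */
static struct elem *rcu_unlink(struct elem **link)
{
    struct elem *old = *link;
    *link = old->next;   /* A now points to C */
    return old;          /* stale B, freed after the grace period */
}
```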
In the context of the read-copy update mechanism, a grace period represents the point at which all running processes having access to a data element guarded by read-copy update have passed through a “quiescent state” in which they can no longer maintain references to the data element, assert locks thereon, or make any assumptions about data element state. By convention, for operating system kernel code paths, a context (process) switch, an idle loop, and user mode execution all represent quiescent states for any given CPU (as can other operations that will not be listed here). As further explained below, in some read-copy update implementations, all reader operations that are outside of an RCU read-side critical section are quiescent states.
In FIG. 3, four processes 0, 1, 2, and 3 running on four separate CPUs are shown to pass periodically through quiescent states (represented by the double vertical bars). The grace period (shown by the dotted vertical lines) encompasses the time frame in which all four processes have passed through one quiescent state. If the four processes 0, 1, 2, and 3 were reader processes traversing the linked lists of FIGS. 1A-1D or FIGS. 2A-2C, none of these processes having reference to the old data element B prior to the grace period could maintain a reference thereto following the grace period. All post grace period searches conducted by these processes would bypass B by following the links inserted by the updater.
There are various methods that may be used to implement a deferred data update following a grace period. One commonly used technique is to have updaters block (wait) until a grace period has completed. This technique has been used to implement a form of read-copy update known as SRCU (Sleepable RCU) wherein readers are allowed to sleep within RCU protected critical sections. The technique contemplates that an updater of a shared data element will first perform an initial (first phase) data update operation that creates the new view of the data being updated. Then, at a later time, the updater performs a deferred (second phase) data update operation that removes the old view of the data being updated. An RCU subsystem representing a set of primitives that can be called by readers and updaters is used to monitor per-processor quiescent state activity in order to detect when each processor's current grace period has expired. As each grace period expires, deferred data updates that are ripe for processing are executed.
The RCU subsystem primitives that readers can invoke in order to facilitate grace period detection may include a pair of fast path routines used by the readers to register and deregister with the RCU subsystem prior to and following critical section read-side operations, thereby allowing the readers to signal the RCU subsystem when a quiescent state has been reached. The rcu_read_lock( ) and rcu_read_unlock( ) primitives of recent Linux® kernel versions are examples of such routines. The rcu_read_lock( ) primitive is called by a reader immediately prior to entering an RCU read-side critical section and the rcu_read_unlock( ) primitive is called by the reader upon leaving the RCU read-side critical section. In some RCU implementations, this type of grace period detection is implemented using a pair of counters and an index. One counter of the pair corresponds to the current grace period generation and the other counter corresponds to the previous grace period generation. The index indicates which counter is current. When a reader enters an RCU read-side critical section, it atomically increments the counter identified by the index that corresponds to the current grace period. The reader then atomically decrements that same counter when it leaves the RCU read-side critical section. Grace period advancement and deferred data element update processing will not be performed until it is determined that the reader has decremented the counter, thereby ensuring that the data element can be freed without incident.
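The counter-pair scheme just described might be sketched as follows. This is an illustrative single-threaded rendition: the identifiers are hypothetical, plain integer operations stand in for the atomic increments and decrements, and per-CPU counter distribution is omitted.

```c
/* A pair of counters and an index, as described above.  One
 * counter corresponds to the current grace period generation
 * and the other to the previous generation. */
static int rcu_ctr[2];
static int rcu_idx;   /* indicates which counter is current */

/* Read-side registration (cf. rcu_read_lock()).  Returns the
 * generation the reader registered under, so that the matching
 * deregistration decrements the same counter even if an updater
 * flips rcu_idx in the meantime. */
static int reader_lock(void)
{
    int idx = rcu_idx;
    rcu_ctr[idx]++;   /* an atomic increment in a real implementation */
    return idx;
}

/* Read-side deregistration (cf. rcu_read_unlock()). */
static void reader_unlock(int idx)
{
    rcu_ctr[idx]--;   /* an atomic decrement in a real implementation */
}
```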
When an updater performs a data element update, it starts a new grace period by changing the index to “flip” the roles of the counters. Additional operations may be performed by the updater to ensure that readers are aware of the counter flip and do not mistakenly manipulate the wrong counter, such as by maintaining a bias value on the current counter. New readers that subsequently enter their RCU read-side critical sections will now use the “new” current counter while the old readers that are using the non-current counter will periodically exit their RCU read-side critical sections, decrementing the non-current counter as they do so. When the non-current counter is decremented to zero, indicating that all readers have left their read-side critical sections, the previous grace period is deemed to have expired and the updater may free the stale data element that resulted from the data element update.
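The updater's side of the counter flip might be sketched as follows, again as an illustrative single-threaded rendition with hypothetical identifiers. The bias value, the serialization of concurrent updaters, and sleeping (rather than spinning) while the non-current counter drains are all elided.

```c
static int rcu_ctr[2];
static int rcu_idx;   /* indicates which counter is current */

static int reader_lock(void)
{
    int idx = rcu_idx;
    rcu_ctr[idx]++;   /* an atomic increment in a real implementation */
    return idx;
}

static void reader_unlock(int idx)
{
    rcu_ctr[idx]--;   /* an atomic decrement in a real implementation */
}

/* Start a new grace period by flipping the counters, then wait
 * for the now-non-current counter to drain to zero.  A real
 * implementation would hold an updater lock, maintain a bias
 * value so readers cannot race with the flip, and block rather
 * than spin. */
static void updater_wait_for_readers(void)
{
    int old_idx = rcu_idx;
    rcu_idx = !old_idx;                /* "flip" the counter roles */
    while (rcu_ctr[old_idx] != 0)
        ;                              /* old readers drain the counter */
    /* previous grace period has expired; stale data may be freed */
}
```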
The foregoing update processing can produce significant update-side latencies, even when there are no RCU read-side critical sections in progress. Updating the index to perform the counter flip, setting a bias value, and testing the non-current counter for zero all incur processing overhead. The latency arises because these mechanisms are designed to favor read-side performance and scalability, and therefore minimize the coordination required of readers. For example, if the updater does not take steps to ensure that readers are manipulating the correct counter, a reader could end up incrementing a counter that has just been switched to the non-current state. This means that updaters cannot simply wait until the non-current counter reaches zero; they must also wait until they can be sure that no readers are about to increment the non-current counter. This can be problematic where RCU read-side critical sections are either extremely short or bursty, such that there is a high probability that updates will occur when no readers are present. In such cases, the above-described RCU implementation will unnecessarily delay updates.
An RCU implementation known as QRCU represents a prior art solution to this problem. In QRCU, the updater acquires a lock on the counters to exclude other updaters, and performs a check of the current counter to see if it indicates the presence of any readers within an RCU read-side critical section. If the current counter indicates that no readers are present, the updater releases the counter lock, exits from the grace period detection sequence, and immediately frees the stale data element that resulted from the update operation that initiated grace period detection. On the other hand, if the counter indicates that a reader is engaged in RCU critical section processing, the updater performs conventional slow path grace period detection by flipping the counters, transferring a bias value from the non-current counter to the (new) current counter, releasing the counter lock, and blocking until the non-current counter decrements to zero. Although this solution decreases updater overhead and latency in the absence of readers, there is still delay associated with acquiring and releasing the counter lock.
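The QRCU fast path might be sketched as follows. The counter lock, the bias value, and the slow-path blocking are elided, and the identifiers are hypothetical; the function merely reports whether the fast path applies.

```c
static int rcu_ctr[2];
static int rcu_idx;   /* indicates which counter is current */

static int reader_lock(void)
{
    int idx = rcu_idx;
    rcu_ctr[idx]++;   /* an atomic increment in a real implementation */
    return idx;
}

static void reader_unlock(int idx)
{
    rcu_ctr[idx]--;   /* an atomic decrement in a real implementation */
}

/* QRCU-style check, with the counter lock and bias value elided.
 * Returns nonzero when the fast path applies. */
static int qrcu_sync_fast_path(void)
{
    if (rcu_ctr[rcu_idx] == 0) {
        /* No readers within an RCU read-side critical section:
         * the grace period is trivially over and the stale data
         * element can be freed immediately. */
        return 1;
    }
    /* Otherwise fall back to slow path grace period detection:
     * flip the counters, transfer the bias value, and block
     * until the non-current counter decrements to zero. */
    return 0;
}
```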
It is to solving the foregoing problem that the present invention is directed. In particular, what is required is a read-copy update technique that reduces updater grace period detection overhead in cases where RCU read-side critical sections are short or bursty, yet which avoids the overhead associated with locking. These requirements will preferably be met in a manner that avoids excessive complexity of the grace period detection mechanism itself.