It is known to provide locks permitting both exclusive and shared holds for computer resources such as data structures, work queues, devices, adapters, files, databases, and directories to enable a computer program or program function to access each resource. For example, a computer program (executing in either a single processor or multiprocessor environment) can obtain an exclusive hold of a lock on a file to update the file. The exclusive hold prevents any other program from accessing the file while it is being updated, even to read the file. This prevents the other programs from accessing stale or inconsistent data in the file. However, if one or more programs only want to read a file (which is not currently locked exclusively), they can all concurrently obtain a shared hold of the lock, and concurrently read (but not update) the file because none of these programs will change the data. Some lock management operations can be performed by a macro which expands inline to generate code. Such a macro can handle most common cases. Other lock management operations can be performed by a program subroutine to which the macro delegates processing in the remaining cases. The overhead of the subroutine linkage is thereby avoided in the common cases.
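The shared/exclusive distinction described above can be sketched as a minimal reader-writer lock. This is an illustrative sketch only; the class and method names are hypothetical and not taken from any particular prior-art implementation:

```python
import threading

class SharedExclusiveLock:
    """Minimal sketch of a lock permitting both shared and exclusive
    holds (hypothetical illustration, not a specific prior-art design)."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0     # count of active shared holders
        self._writer = False  # True while an exclusive hold is active

    def acquire_shared(self):
        with self._cond:
            # Shared holds may overlap each other, but not an exclusive hold.
            while self._writer:
                self._cond.wait()
            self._readers += 1

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_exclusive(self):
        with self._cond:
            # An exclusive hold excludes all other holders, shared or exclusive.
            while self._writer or self._readers > 0:
                self._cond.wait()
            self._writer = True

    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

With such a lock, several readers call `acquire_shared` concurrently, while an updater calling `acquire_exclusive` waits until no other hold of either kind remains.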
In some cases, a thread of execution within a program or program function waits, by looping or “spinning”, until a lock it requests becomes available and is granted. In such a case, the lock is considered to be a “spin lock.” In other cases, a thread requests a lock and is deferred until the lock is granted; in the interim, other threads of the same or other programs or program functions can execute on the processor. Typically, a spin lock is used when multiple processors in a single computer execute different threads, and the resource is accessed in non-preemptable portions of the threads while protected by the lock.
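A spin lock of this kind can be sketched as follows. The sketch is illustrative: a non-blocking acquire stands in for the atomic test-and-set instruction a real spin lock would use, and the names are hypothetical:

```python
import threading

class SpinLock:
    """Sketch of a spin lock: the requesting thread loops ("spins")
    until it observes the lock free and atomically claims it."""

    def __init__(self):
        # acquire(blocking=False) serves here as the atomic
        # test-and-set primitive of a real spin lock.
        self._flag = threading.Lock()

    def acquire(self):
        # Spin: keep retrying rather than deferring the thread.
        while not self._flag.acquire(blocking=False):
            pass

    def release(self):
        self._flag.release()
```

A deferring lock differs in that a failed request suspends the thread so the processor can run other work, rather than burning cycles in the retry loop.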
For example, there can be multiple applications, a single operating system and multiple processors in a computer. Each processor has its own dispatcher function and scheduler function, and there is a common work queue in shared memory. The dispatcher or scheduler program function executing on each processor may need a lock to access the common work queue. Typically, such a lock would be a spin lock, and the dispatcher or scheduler program function would spin, waiting for the lock. In the simplest implementation, spin locks are not necessarily granted in the order that they are requested. Instead, they are obtained by whichever processor first happens to find the lock in an available state. However, when there are multiple different programs executing on the same processor (in a time shared manner), each program typically requests a deferring type of lock. Typically, requests for deferring types of locks are queued and granted in the order requested, when the lock becomes available. A lock manager release function known in the prior art typically removes from the front of the queue either a single exclusive request or a chain of consecutive shared requests, and grants the lock to those requestors upon release by its current holder.
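The prior-art release function described above can be sketched as follows, assuming a hypothetical queue of (kind, requestor) pairs where the kind is 'S' for a shared request or 'X' for an exclusive request:

```python
from collections import deque

def grants_on_release(queue):
    """Sketch of the prior-art release policy: upon release by the
    current holder, grant either the single exclusive request at the
    front of the queue, or the chain of consecutive shared requests
    at the front. The ('S'|'X', requestor) representation is an
    assumption for illustration."""
    granted = []
    if not queue:
        return granted
    if queue[0][0] == 'X':
        # A single exclusive request is removed and granted alone.
        granted.append(queue.popleft())
    else:
        # Consecutive shared requests at the front are granted together.
        while queue and queue[0][0] == 'S':
            granted.append(queue.popleft())
    return granted
```

For example, with a queue of two shared requests followed by an exclusive request, the first release grants both shared requests together, and the next release grants the exclusive request alone.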
One problem with known types of lock managers occurs when there are multiple sequential/staggered requests for shared holds on a lock, and requests for shared holds continue to be granted whenever the lock state permits it. In this case, requests for a shared lock will continue to be granted as long as there is at least one outstanding shared hold that has not yet been released. Requests for exclusive holds will be delayed during this period. For example, a first request for a shared hold occurs at time “0” and the requestor holds the lock for 1 second. A second request for a shared hold on the same lock occurs at time “0.9” seconds, and is granted and the requestor holds the lock for 1 second, i.e. until time “1.9” seconds. A third request for a shared hold on the same lock occurs at time “1.7” seconds, and is granted and the requestor holds the lock for 1.5 seconds, i.e. until time “3.2” seconds. A fourth request for a shared hold on the same lock occurs at time “3.0” seconds, and is granted and the requestor holds the lock for 1 second, i.e. until time “4.0” seconds. In such a case, a request for an exclusive hold on the same lock made at time “0.5” seconds will not be granted until all of the overlapping shared holds are released and no further request for a shared hold arrives during this period. This prolongs the time that the lock is unavailable for an exclusive hold. This problem is called “livelock” and can “starve” a request for an exclusive hold, even when the request for the exclusive hold is received before other requests for a shared hold on the same lock are received (and granted). In the foregoing example, the request for the exclusive hold made at time “0.5” seconds will not be granted until at least time “4.0” seconds.
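The timeline in this example can be modeled with a short sketch, assuming the problematic policy that a shared request is granted whenever the lock is free or at least one shared hold is still outstanding; the function name and representation are illustrative:

```python
def exclusive_wait_end(shared_requests):
    """Sketch: model the policy under which a shared request is granted
    whenever the lock is free on arrival or a shared hold is still
    outstanding. Returns the time at which the last of the chained
    shared holds is released, i.e. the earliest time a waiting
    exclusive request could be granted."""
    busy_until = 0.0
    for arrival, duration in shared_requests:
        # Granted if the lock is free or a shared hold remains; the
        # waiting exclusive request never gets a turn in between.
        if arrival <= busy_until:
            busy_until = max(busy_until, arrival + duration)
    return busy_until

# The staggered shared requests from the example: (arrival, duration).
shared = [(0.0, 1.0), (0.9, 1.0), (1.7, 1.5), (3.0, 1.0)]
print(exclusive_wait_end(shared))  # prints 4.0
```

The exclusive request made at time 0.5 seconds thus waits until time 4.0 seconds, matching the example above.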
In the past, the livelock problem has been addressed by strictly ordering lock requests by arrival time and granting them first in, first out. However, in a virtualized environment this can lead to inefficiencies, because the processor next entitled to the lock may not be dispatched by the underlying hypervisor when the lock becomes available, whereas other processors may be. Those dispatched requestors then continue to spin even though they could make immediate use of the lock if it were granted.
A second problem with strictly ordering lock requests by arrival time is that it is inefficient in batching together requests for shared holds of a lock, if exclusive and shared requests are interleaved. As an example, suppose that a lock is not held, and a request is made and granted for an exclusive hold of the lock at time “0” seconds. Also assume that all lock holders will hold the lock for 1 second. Now assume that before time “1.0”, there is a shared request, followed by an exclusive request, followed by a shared request, followed by an exclusive request. In this example, a strictly ordered lock manager at time “1.0” would handle the release of the initial exclusive hold, and grant a shared hold that would last for 1 second. At time “2.0” the shared hold would be released and an exclusive hold would be granted. At time “3.0” the exclusive hold would be released and another shared request would be granted. At time “4.0” the shared hold would be released and an exclusive request would be granted. And finally, at time “5.0” the exclusive hold would be released. However, if at time “1.0” the two pending shared requests were batched and granted together, the next exclusive request would still be granted at time “2.0”, but the third exclusive request would be granted earlier, i.e. at time “3.0”, and its exclusive hold would be released at time “4.0”, finishing one second earlier. Thus, a strictly ordered lock manager can benefit from batching shared requests only when the shared requests arrive one after the other with no exclusive requests in between.
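The two schedules in this example can be compared with a small sketch. The function name, parameters, and the 'S'/'X' request representation are assumptions for illustration only:

```python
def drain(pending, batch_shared, hold=1.0, t=1.0):
    """Sketch: return the time at which the last pending request
    releases the lock. `pending` is the queue, in arrival order, at
    time `t` (when the initial exclusive hold is released); every hold
    lasts `hold` seconds. With batch_shared=True, pending shared
    requests are granted together whenever a shared request is at the
    front of the queue."""
    pending = list(pending)
    while pending:
        if batch_shared and pending[0] == 'S':
            # Grant every pending shared request as one overlapping batch.
            pending = [r for r in pending if r != 'S']
        else:
            # Strict arrival order: grant only the front request.
            pending.pop(0)
        t += hold
    return t

# Pending at time 1.0: shared, exclusive, shared, exclusive.
print(drain(['S', 'X', 'S', 'X'], batch_shared=False))  # prints 5.0
print(drain(['S', 'X', 'S', 'X'], batch_shared=True))   # prints 4.0
```

The batched schedule finishes at time 4.0 rather than 5.0, reproducing the one-second saving described in the example.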
Another problem with known types of lock managers occurs when there are multiple sequential requests for exclusive holds of a lock, followed by multiple overlapping requests for shared holds of the lock. In this case, the multiple requestors for the shared holds all have to wait for each of the prior requests for exclusive holds to be satisfied. In other words, many requestors of shared holds, which could be satisfied at least partially in an overlapped manner, must each wait for every prior request for an exclusive hold, each of which satisfies only a single requestor.
The following lock-granting algorithm was also known. According to this algorithm, a pending request for an exclusive hold on a lock takes precedence over any pending requests for a shared hold on the same lock. This algorithm is optimum when exclusive requests are rare, but can cause lengthy waits for shared requests when exclusive requests are more common.
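This known algorithm can be sketched as a grant-selection function applied when the lock becomes available; the ('S'|'X', requestor) representation of pending requests is an assumption for illustration:

```python
def next_grant(pending):
    """Sketch of the known exclusive-priority policy: any pending
    request for an exclusive hold takes precedence over all pending
    shared requests, regardless of arrival order. `pending` is a list
    of ('S'|'X', requestor) pairs in arrival order (an illustrative
    representation)."""
    for kind, requestor in pending:
        if kind == 'X':
            # The oldest exclusive request is granted first, even ahead
            # of shared requests that arrived before it.
            return (kind, requestor)
    # No exclusive request is pending: the shared requests may be
    # granted, starting from the oldest.
    return pending[0] if pending else None
```

Under this policy a steady stream of exclusive requests keeps winning, which is why shared requests can wait lengthily when exclusive requests are common.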
Accordingly, an object of the present invention is to reduce average wait time for requestors, particularly when there is a mix of shared requests and exclusive requests.