On a computer system, nearly all aspects of process execution are managed by the operating system kernel, which is the software forming the core of an operating system. For example, the kernel is responsible for scheduling the execution of processes and managing the resources made available to and used by the processes. Processes may be typical programs such as word processors, spreadsheets, games, or web browsers. Processes may also be underlying tasks executing to provide additional functionality to either the operating system or to the user of the computer, or additional processes of the operating system that provide various functionalities, such as networking functionality and/or file sharing functionality, to other parts of the operating system.
The kernel is also responsible for controlling access by multiple processes that are running in a computer and accessing a shared resource, so that the processes do not use the shared resource simultaneously and do not cause a fault in the kernel. As an example, a shared resource may be the physical memory available to a process.
Various mechanisms may be used to control access to a shared resource. A prior solution for coordinating the execution of multiple processes has been to use a shared variable in a resource shared between the two processes. One of the processes is made to spin until it sees the value of the variable change, while the other process changes the contents of the variable. As known to those skilled in the art, when a process spins, the process simply waits in a loop and repeatedly checks for a lock to become available. As also known to those skilled in the art, a lock is a mechanism for enforcing limits on access to a resource in an environment where there are many threads of execution. Locks are one way of enforcing concurrency control policies. Threads are similar to processes, since both represent a single sequence of instructions executed in parallel with other sequences. Threads permit a program to split itself into two or more simultaneously executing tasks. As an example, one thread may monitor a graphical user interface, while other threads perform a long calculation in the background. As a result, the application responds more readily to the user's interaction. Multiple threads typically share memory and other resources directly. On operating systems that have special facilities for threads, it is typically faster for the operating system to context switch between different threads in the same process than to switch between different processes.
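The spinning mechanism described above can be illustrated with a minimal user-space sketch. The function names (spin_lock, spin_unlock, run_spinlock_demo) are hypothetical and are not taken from any particular kernel; the sketch merely shows one thread spinning on a shared flag until the holder clears it, with the flag guarding a shared counter that stands in for the shared resource.

```c
#include <pthread.h>
#include <stdatomic.h>

/* A minimal spin lock: a waiting thread loops ("spins"), repeatedly
 * testing the flag, until the holding thread clears it. */
static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

static void spin_lock(void)
{
    while (atomic_flag_test_and_set(&lock_flag)) {
        /* spin: wait for the lock to become available */
    }
}

static void spin_unlock(void)
{
    atomic_flag_clear(&lock_flag);
}

static long shared_counter;   /* stands in for the shared resource */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        spin_lock();
        shared_counter++;     /* critical section on the shared resource */
        spin_unlock();
    }
    return NULL;
}

/* Runs two workers against the shared counter; with correct locking
 * no updates are lost, so the final count is 200000. */
long run_spinlock_demo(void)
{
    pthread_t t1, t2;
    shared_counter = 0;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return shared_counter;
}
```

Because each increment is bracketed by the lock, the two threads never operate on the counter at the same time, which is precisely the guarantee the kernel must provide for its own shared resources.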
A test for proper locking is required in order to detect any missing locks or misplaced locks. Improper locking introduces an opportunity for a failure if dependent code paths execute at the same time. Programmers go to extraordinary measures to eliminate these failure opportunities, which are known as locking windows.
A missing lock situation typically occurs when the programmer does not code a lock between multiple dependent processes. As a result, each process will execute its task while ignoring the other process or processes. Missing locks are detected by use of a testing method that uses the "butterfly testing pattern".
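The failure mode of a missing lock can be sketched as follows. This is a hypothetical illustration (the names racy_worker and run_missing_lock_demo are not from the source): the lock is simply omitted, so each thread reads, increments, and writes the shared counter while ignoring the other thread, and increments performed concurrently can be lost.

```c
#include <pthread.h>

/* The counter is declared volatile so each increment is a genuine
 * read-modify-write in memory; there is deliberately NO lock here. */
static volatile long unlocked_counter;

static void *racy_worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        unlocked_counter++;   /* unprotected: updates by the other
                                 thread can be silently overwritten */
    }
    return NULL;
}

/* With the lock missing, the final count may be anywhere up to 200000;
 * any shortfall is an update lost in the locking window. */
long run_missing_lock_demo(void)
{
    pthread_t t1, t2;
    unlocked_counter = 0;
    pthread_create(&t1, NULL, racy_worker, NULL);
    pthread_create(&t2, NULL, racy_worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return unlocked_counter;
}
```

Because the loss of updates depends on the two threads happening to collide, a single run may appear correct; this is why a testing method such as the butterfly pattern, which deliberately forces the code paths to overlap, is needed to expose the defect.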
In prior methods, a broad butterfly testing method is used, where random skew patterns are used. However, the test coverage in these prior methods is not sufficiently broad because the set of skew values is small.
A misplaced lock situation occurs when the step of obtaining or releasing a lock by a process is placed at an incorrect statement in the code. As an example, if the lock is misplaced, then a process might release the lock prior to the correct release step. Misplaced locks can be detected based on abrupt changes in the behavior of the software. These abrupt changes are known as a "knife-edge". Testing of skew times around the knife-edge is then performed to verify a misplaced lock.
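A misplaced lock can likewise be sketched in a few lines. In this hypothetical example (misplaced_worker and run_misplaced_lock_demo are illustrative names), two fields are meant to be updated together under one lock, but the unlock is placed one statement too early; the second update falls outside the critical section, opening exactly the kind of window the knife-edge behavior reveals.

```c
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

/* Invariant: head_count and tail_count should always advance together. */
static volatile long head_count, tail_count;

static void *misplaced_worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 50000; i++) {
        pthread_mutex_lock(&m);
        head_count++;
        pthread_mutex_unlock(&m);  /* MISPLACED: released one step early */
        tail_count++;              /* window: this update now races, and
                                      another thread can observe
                                      head_count != tail_count here */
    }
    return NULL;
}

/* head_count stays correct (always incremented under the lock), while
 * tail_count can lose updates; the divergence marks the misplaced lock. */
long run_misplaced_lock_demo(void)
{
    pthread_t t1, t2;
    head_count = tail_count = 0;
    pthread_create(&t1, NULL, misplaced_worker, NULL);
    pthread_create(&t2, NULL, misplaced_worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return head_count;
}
```

Note the asymmetry: the protected field reliably reaches its expected total, while the field left outside the lock may not. It is this abrupt divergence in behavior, appearing only when the threads happen to overlap in the window, that skew testing around the knife-edge is designed to provoke.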
Synchronization is a prerequisite to executing two or more paths in parallel. For normal kernel behavior, synchronization is very rarely needed. Ideally, the paths in a normally behaving kernel are never synchronized. Locks are used to obtain correct behavior for the rare cases where the paths by coincidence execute in parallel. As discussed below, the invention deals with forcing these rare cases to occur, for testing purposes. In a normally executing system, these cases rarely occur.
Once this synchronization has occurred, both processes are made to spin for a specified, and possibly different, amount of time. After completing a post-synchronization spin loop, each of the processes issues a system call that causes the execution of one of the two code paths. The disadvantage of this previous method is the lengthy code path from the system call entry to the code to be executed in parallel. Depending mostly on the number of I-cache (instruction cache) misses, the amount of time required for the code to be executed in parallel is not deterministic. In other words, there is no guarantee that the two code paths will be executed concurrently. For example, executing two code paths within about twenty (20) clocks has never been achieved by use of previous methods, even with thousands of execution attempts. Typically, in previous methods, two code paths execute within about 500 clocks less than approximately 10% of the time.
Therefore, the current technology is limited in its capabilities and suffers from at least the above constraints and deficiencies.