Work queues are used by many modern operating systems to manage asynchronous processing tasks. When an application or other system component discovers a processing task, referred to as a work item, that does not need to be completed immediately, it places the work item on a work queue. The work queue typically has one or more dedicated threads. A thread is a unit of software that can be executed by a processor or processor core. A different component manages the execution of the work queue threads, as well as other threads. When executed, the work queue threads pull items from the work queue and execute them, often according to a first-in, first-out (FIFO) arrangement. In this way, work items that are not time sensitive, or that benefit from “batch” processing, can be queued and performed when the computer has available processing resources. Examples of work items that are often queued include interrupt processing, input/output (I/O) requests, and garbage collection.
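The arrangement described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular operating system's implementation: the work items, the sentinel convention, and the single dedicated thread are all assumptions made for the example.

```python
import queue
import threading

# A minimal work queue: callables are enqueued as work items and a
# dedicated worker thread drains them in FIFO order.
work_queue = queue.Queue()
results = []

def worker():
    # Dedicated work-queue thread: pull and execute items until a
    # None sentinel signals that no more work is coming.
    while True:
        item = work_queue.get()
        if item is None:
            break
        item()                    # execute the work item
        work_queue.task_done()

# Queue two deferred work items (e.g., cleanup tasks) for later execution.
work_queue.put(lambda: results.append("item-1"))
work_queue.put(lambda: results.append("item-2"))

t = threading.Thread(target=worker)
t.start()
work_queue.put(None)              # sentinel: no more work
t.join()

print(results)                    # executed in FIFO order
```

Because the items run only when the worker thread is scheduled, the producer can enqueue work and move on, deferring execution until processing resources are available.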
Many systems support both platform work queues and user work queues. Platform work queues are typically managed by the operating system and are made available to a wide range of applications and/or other system components. User work queues are created by applications and are used by a limited set of system components. User work queues are often used to avoid race and deadlock conditions that can occur with platform work queues or existing user work queues. For example, when a work item (A) requires a result of another work item (B) to complete its processing task, the work item (A) cannot be completed until after the work item (B) is completed. If the work item (A) begins to execute before the work item (B) is completed, the thread executing the work item (A) will go into a wait state. If there are not sufficient dedicated threads to process the work item (B), then the thread executing the work item (A) will remain in its wait state indefinitely, which is referred to as a deadlock condition. In another example, a work item (X) may be coded to assume that another work item (Y) has already executed and made particular changes to system data. If the work item (X) is executed before the work item (Y), system data may not be in the state expected by work item (X), resulting in a race condition.
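The deadlock scenario above can be reproduced with a thread pool standing in for the work queue. This is a hedged sketch: `item_b`, `make_item_a`, and the pool sizes are assumptions made for illustration, and a timeout plus `cancel_futures` are used so the demonstration can unwind rather than hang (requires Python 3.9+).

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

# Hypothetical work items: (A) cannot complete until it obtains the
# result of (B), mirroring the dependency described above.
def item_b():
    return 42

def make_item_a(executor):
    def item_a():
        # (A) submits (B) to the same work queue, then blocks in a
        # wait state until (B) completes.
        return executor.submit(item_b).result()
    return item_a

# With a single dedicated thread, (A) occupies the only worker while it
# waits, so (B) is never scheduled: a deadlock.
single = ThreadPoolExecutor(max_workers=1)
fut = single.submit(make_item_a(single))
try:
    fut.result(timeout=0.5)       # never completes on its own
    deadlocked = False
except TimeoutError:
    deadlocked = True
# Cancel the still-pending (B) so the blocked worker can unwind and exit.
single.shutdown(wait=False, cancel_futures=True)

# With two dedicated threads, (B) runs on the second thread and (A)
# completes normally.
pool = ThreadPoolExecutor(max_workers=2)
result = pool.submit(make_item_a(pool)).result(timeout=0.5)
pool.shutdown()
print(deadlocked, result)         # True 42
```

Giving the dependent work its own queue (or simply enough dedicated threads) is exactly the remedy that motivates user work queues in the passage above.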
User work queues give software developers additional control to prevent race and deadlock conditions. At the same time, however, each work queue consumes system resources. For example, each active thread for a work queue utilizes physical memory space that is either filled or reserved for the thread. As more and more work queues are used, more resources are consumed. Thus, while individual applications may be optimized, overall system performance suffers.