Effective resource management is essential to an efficient computing environment. Many resource management techniques have been attempted and implemented, but each approach involves trade-offs between throughput and flexibility. Traditional approaches include homogeneous resource management, which has the drawback that only one type of resource may be managed at a given time. Existing approaches also include static allocation, which allocates resources prior to their use. However, this may consume valuable memory resources when, in fact, such resources may not actually be required by a program.
More advanced techniques make use of threads. Threads are a time-sharing facility used in modern multi-tasking operating systems and programming languages to make programs appear more responsive. Using multiple threads, a program can appear to perform multiple tasks at the same time.
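For illustration, the following sketch shows two threads appearing to perform separate tasks at the same time. Python and its standard threading module are used here only as an example; the technique is language-agnostic, and the task names are hypothetical.

```python
import threading

results = []

def worker(name, count):
    # Each thread appends its own items; the interleaving between
    # the two threads is not guaranteed.
    for i in range(count):
        results.append((name, i))

# Two hypothetical tasks run on separate threads.
t1 = threading.Thread(target=worker, args=("download", 3))
t2 = threading.Thread(target=worker, args=("render", 3))
t1.start()
t2.start()
t1.join()
t2.join()

print(len(results))  # 6: three items contributed by each thread
```

Because both threads run concurrently, a long-running task on one thread does not prevent the other from making progress, which is what makes the program appear more responsive.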
Threading comes in two main types, namely, pre-emptive and co-operative. With pre-emptive threading, the operating system is responsible for allocating CPU time to each thread, whereas, with co-operative threading, each thread is responsible for yielding its time slice to the operating system. In both cases, when a thread needs to wait for a resource that is very slow compared to the CPU (such as reading a block of data from a disk), the CPU is yielded to another thread that is able to perform work, and this makes the system more responsive. When a multi-threaded program accesses shared or common data, it is extremely important that access to this shared data is serialized. Failure to serialize could be problematic, as one thread may be in the middle of an update when another thread pre-empts it. The thread that takes over will then observe the shared data in an unknown state.
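The serialization requirement described above can be sketched as follows. This is a minimal example using a mutual-exclusion lock (Python's threading.Lock is assumed here as the serialization primitive); without the lock, a thread could be pre-empted in the middle of the read-modify-write sequence and updates could be lost.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # The lock serializes the read-modify-write sequence, so no
        # thread ever observes the counter in a half-updated state.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: every increment is preserved
```

If the `with lock:` line is removed, two threads may read the same value of `counter` before either writes it back, so some increments are silently lost and the final total can fall short of 40000.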
Certain existing resource management configurations utilize a single resource instance which is shared between all threads. A typical system based on this basic approach is performance limited, however, since only one thread can access the resource at any time. Serialized access to the resource needs to be implemented by the resource itself. This makes scalable resource development considerably more complex and prone to deadlock situations that are extremely difficult to track down and fix. As an alternative, resources may be explicitly allocated per client application thread. In the traditional approach, however, each thread incurs the overhead of a new resource allocation rather than the resource being shared or reused by multiple clients.
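The per-thread allocation alternative can be sketched as follows, assuming thread-local storage (Python's threading.local is used here for illustration, and the Resource class is hypothetical). Each thread lazily allocates its own instance, so access needs no serialization, but the allocation count shows the per-thread overhead noted above.

```python
import threading

class Resource:
    """Hypothetical resource; allocations are counted to make the
    per-thread overhead visible."""
    created = 0
    _created_lock = threading.Lock()

    def __init__(self):
        with Resource._created_lock:
            Resource.created += 1
            self.id = Resource.created

local = threading.local()

def get_resource():
    # Each thread allocates its own instance on first use, so no
    # serialization is needed when the resource is used - but every
    # new client thread pays the full allocation cost rather than
    # reusing an existing instance.
    if not hasattr(local, "resource"):
        local.resource = Resource()
    return local.resource

ids = []
ids_lock = threading.Lock()

def worker():
    r = get_resource()
    assert get_resource() is r  # reused within the same thread
    with ids_lock:
        ids.append(r.id)

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(Resource.created)  # 3: one allocation per client thread
```

Three client threads produce three allocations, whereas a shared-instance design would allocate once but would have to serialize every access, which is exactly the trade-off described above.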
A further option is based upon decentralized management of resources. In this case, resource failure is handled on a per-resource basis, rather than by a resource manager, and resource limitations are handled by application logic rather than being treated transparently.