1. Field of the Invention
The present invention is directed to resource sharing in a computing system and, more particularly, to how scarce resources are allocated.
2. Description of the Related Art
A network filesystem, such as NFS from Sun Microsystems, Inc. of Santa Clara, Calif., provides client computer systems with access to files in centralized storage in a client-server operating environment. Such network filesystems generate resource acquisition requests for read and write access to data and to obtain metadata, i.e., information about the data, such as its structure, etc.
The resource acquisition requests are generated both by local filesystems, which access storage connected directly to the system executing the process that needs the data, and by network filesystems, which access remote data via a network connected to that system. Execution of a resource acquisition request assigns one or more resources from a fixed pool, such as handles for buffers, pages of memory for storage of the actual data, structures required for data caching, etc. Conventionally, the resources are assigned as the resource acquisition requests are generated, whether for a local filesystem or a network filesystem, until the maximum number of resources is in use. Because the data to be accessed is remote, network resource acquisition requests take longer to execute. As a result, it is not uncommon for a single network filesystem, or a group of network filesystems, to monopolize all of the resources and “starve” other filesystems, particularly local filesystems, from having their resource acquisition requests executed.
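The conventional first-come-first-served assignment described above can be sketched as follows. This is a hypothetical illustration, not any claimed method; the pool size, request structure, and hold times are assumptions chosen only to show how slow network acquisitions can starve a fast local request.

```python
from collections import deque

POOL_SIZE = 4  # fixed number of resources (e.g., buffer handles)

class Request:
    def __init__(self, origin, hold_time):
        self.origin = origin        # "network" or "local"
        self.hold_time = hold_time  # ticks the resource stays assigned

def simulate(requests, ticks):
    """Grant resources strictly in arrival order, regardless of origin."""
    free = POOL_SIZE
    pending = deque(requests)
    in_use = []    # (request, tick at which its resource is released)
    granted = []   # (tick granted, origin) in grant order
    for t in range(ticks):
        # release resources whose holders have finished
        still_held = []
        for req, release in in_use:
            if release <= t:
                free += 1
            else:
                still_held.append((req, release))
        in_use = still_held
        # assign resources as requests were generated, until none are free
        while pending and free > 0:
            req = pending.popleft()
            free -= 1
            in_use.append((req, t + req.hold_time))
            granted.append((t, req.origin))
    return granted

# Four slow network requests arrive first and occupy the whole pool;
# a fast local request must wait until a network holder finishes.
reqs = [Request("network", 10) for _ in range(4)] + [Request("local", 1)]
log = simulate(reqs, 15)
```

In this sketch the local request, although it would hold its resource for only one tick, is not granted anything until tick 10, when the first network holder releases.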
Specifically, when multiple read operations are interspersed with heavy write traffic, a client node may experience delays in reading files and directories due to scheduling difficulties. At any given time, write requests can greatly outnumber read and read-ahead requests, and all new requests are placed at the end of a request queue. Because an application must wait for a file server to read data before it can continue, read throughput is extremely sensitive to latency. Read-ahead instructions attempt to reduce this latency by queuing the next block before the application requests it. However, read-ahead requests are conventionally placed on the same first-in, first-out (FIFO) queue as write requests. Through the use of a buffer cache, a client node can queue a large number of write requests at a time, but only a small number of read requests. As a result, an asynchronous request queue may be dominated by write requests.
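The queuing behavior described above can be illustrated with a short sketch. This is a hypothetical example, not any claimed method; the queue contents and block numbers are assumptions used only to show how a shared FIFO queue forces a latency-sensitive read-ahead to wait behind every buffered write.

```python
from collections import deque

# A single FIFO request queue shared by write and read-ahead requests.
queue = deque()

# The client's buffer cache lets it queue many write requests at once...
for block in range(100):
    queue.append(("write", block))

# ...then a read-ahead for the next block is issued, landing behind
# every buffered write on the same queue.
queue.append(("read_ahead", 100))

# The server drains the queue strictly in FIFO order, so the
# latency-sensitive read-ahead is served only after all 100 writes.
position = list(queue).index(("read_ahead", 100)) + 1
```

Here the read-ahead is the 101st request served, even though the application is stalled waiting for exactly that block.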
Basically, there are three methods for handling a resource shortage: (A) stop using the resource until free resources are available; (B) manage accesses to the shared resource so that it is always eventually available, although a requester may have to wait; or (C) redesign the system so that it does not need the shared resource to function. Solution (A) is inefficient, and solution (C) requires substantial reprogramming and, in most cases, other trade-offs. Therefore, it is desirable to implement solution (B) by finding an equitable way of allocating resources that avoids starvation of one or more filesystems while still executing resource acquisition requests efficiently.
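One way solution (B) might be approached is sketched below. This is a minimal illustration under assumed names and parameters (the pool size, the reserve count, and the `ReservingPool` class are all hypothetical), not the claimed invention: a small share of the fixed pool is withheld from network requesters, so a local filesystem's request may have to wait but can never be starved outright.

```python
import threading

POOL_SIZE = 8
LOCAL_RESERVE = 2  # resources never granted to network requesters

class ReservingPool:
    """Fixed resource pool that reserves a minimum share for local requests."""

    def __init__(self):
        self.free = POOL_SIZE
        self.cv = threading.Condition()

    def acquire(self, origin):
        # Network requesters must leave LOCAL_RESERVE resources untouched;
        # local requesters may draw the pool down to zero.
        floor = LOCAL_RESERVE if origin == "network" else 0
        with self.cv:
            while self.free <= floor:
                self.cv.wait()  # block until a resource is released
            self.free -= 1

    def release(self):
        with self.cv:
            self.free += 1
            self.cv.notify_all()

pool = ReservingPool()
# Network requesters can take at most POOL_SIZE - LOCAL_RESERVE resources.
for _ in range(POOL_SIZE - LOCAL_RESERVE):
    pool.acquire("network")
# A local request still succeeds immediately from the reserve.
pool.acquire("local")
```

A further network acquisition at this point would wait on the condition variable rather than consume the reserve, which is the essence of managing access so the resource remains available to all requester classes.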