1. Field of the Invention
The present invention relates to a cache controlling apparatus for controlling the use of the memory of a cache module in a storage system which is composed of a plurality of cache modules connected to a network and a plurality of storage modules connected to the cache modules, and a method thereof.
2. Description of the Related Art
In recent computer systems, a buffer storage device using semiconductor memories is often provided between an auxiliary storage device and a requester of information in order to increase access speed to information in the auxiliary storage device. A buffer storage device for a disk device is called a disk cache, and is often provided in a disk controller for controlling the disk device.
Here, assume a storage system which is composed of a plurality of storage modules, corresponding to the disk devices, and a plurality of cache modules, corresponding to buffer storage devices, connected to the storage modules.
It is assumed that a disk page managed by each cache module is stored in a storage module connected to the cache module. Here, a disk page corresponds to information stored in a memory area (block) of a predetermined fixed length, and "to manage the disk page" means to receive access requests for the disk page and to perform exclusive control, consistency-maintenance control, and the like over the disk page.
Consider, for example, a case where a storage system is provided with two cache modules A and B, where many access requests center on a storage module managed by the cache module A, and where access requests for a storage module managed by the cache module B are rather rare.
In this case, it is desirable for the cache module A to have a greater memory capacity than the cache module B. A greater memory capacity yields a better cache hit rate, and the poorer hit rate of the cache module that rarely receives access requests is compensated for by the better hit rate of the cache module that frequently receives them. In practice, however, the memory capacity of each cache module is fixed in advance by the hardware and cannot be modified according to the frequency of access requests.
This will become clearer if this situation is compared with the LRU (least recently used) method, which is a standard method for determining the pages stored in a cache. As its name indicates, the LRU method leaves the recently used pages in a cache and selects the least recently used page for replacement. As a result, the pages retained in a cache are those ranked highest in descending order of most recent access time, up to the point where the cache is full. Here, the most recent access time of a page is the time of the latest of all prior accesses to that page.
Generally speaking, the more recent the most recent access time of a page, the higher the probability of a further access request for that page. The LRU method therefore exercises optimum control by replacing the page with the least probability of being accessed.
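The LRU replacement policy described above can be sketched in a few lines of Python. This is an illustrative sketch only, not part of the invention; the class and parameter names (`LRUCache`, `capacity`, `load_page`) are hypothetical, with `capacity` playing the role of a cache module's fixed memory size measured in pages.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal sketch of the LRU replacement policy (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page key -> page data, oldest first

    def access(self, key, load_page):
        if key in self.pages:
            # Cache hit: mark the page as the most recently used.
            self.pages.move_to_end(key)
            return self.pages[key]
        if len(self.pages) >= self.capacity:
            # Cache full: evict the least recently used page.
            self.pages.popitem(last=False)
        # Cache miss: fetch the page (e.g. from a storage module) and insert it.
        self.pages[key] = load_page(key)
        return self.pages[key]
```

Because `OrderedDict` keeps insertion order and `move_to_end` refreshes it on each hit, the dictionary's order is exactly the recency order the LRU method relies on.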
However, when there is a difference in access-request frequency between two cache modules, as in the above example, and control using the LRU method is exercised independently within the fixed memory capacity of each cache module, the LRU method is not realized for the system as a whole, and optimum control cannot be exercised.
In this case, the range of the most recent access times of a group of pages in a cache module with a high frequency of access requests becomes narrower than that in a cache module with a low frequency of access requests. The range of the most recent access times is a time difference between the newest time and the oldest time of the most recent access times of the pages in a cache module.
This difference in the range of the most recent access times between cache modules means that, in a cache module with a high frequency of access requests, replacements that should not occur from the viewpoint of the entire system occur easily, while in a cache module with a low frequency of access requests, replacements that should occur from the viewpoint of the entire system rarely do. Therefore, optimum control is not exercised for the entire system.
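The recency-range imbalance described above can be observed in a toy simulation, shown below as an illustrative Python sketch (not from the source). Two modules run independent LRU caches of equal capacity over a hypothetical population of 1,000 pages, with module A receiving a `skew` fraction of all requests; the heavily loaded module ends up with a much narrower range of most recent access times.

```python
import random

def simulate(capacity=50, steps=10_000, skew=0.9, seed=1):
    """Toy illustration of the recency-range imbalance between two
    independently managed LRU caches (all parameters hypothetical)."""
    random.seed(seed)
    caches = {"A": {}, "B": {}}  # page id -> most recent access time
    for t in range(steps):
        module = "A" if random.random() < skew else "B"
        cache = caches[module]
        cache[random.randrange(1000)] = t   # record access time of the page
        if len(cache) > capacity:
            # Independent LRU: evict the page with the oldest access time.
            del cache[min(cache, key=cache.get)]
    # Range = newest minus oldest of the most recent access times
    # of the pages resident in each module.
    return {m: max(c.values()) - min(c.values()) for m, c in caches.items()}
```

With the defaults above, module A's range spans only the last few dozen time steps while module B's spans several hundred, mirroring the imbalance the text describes.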
An object of the present invention is to provide a cache controlling apparatus and method for performing optimum page allocation for the entire system in a storage system which is composed of a plurality of storage modules and a plurality of cache modules.
In the first aspect of the present invention, the cache controlling apparatus comprises a management unit, a transfer-out unit and a transfer-in unit, and controls the cache operation of a storage system which is composed of a plurality of cache modules connected to each other and a plurality of storage modules, each connected to one of the cache modules and storing information.
The management unit allows information managed by a first cache module of the plurality of cache modules to be stored in a second cache module. The transfer-out unit transfers the information managed by the first cache module out from the first cache module to the second cache module. The transfer-in unit transfers the information managed by the first cache module back in from the second cache module to the first cache module.
In the second aspect of the present invention, the cache controlling apparatus comprises a shift unit and a receiver unit. The shift unit dynamically shifts information managed by a first cache module between the first cache module and a second cache module. The receiver unit receives access requests for the information shifted to the second cache module via the first cache module.
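The transfer-out, transfer-in and request-forwarding behavior of the first and second aspects can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the claimed apparatus: the class `CacheModule` and its members (`memory`, `lent_out`, `read`) are hypothetical names, and each module's memory is modeled as a simple in-process dictionary.

```python
class CacheModule:
    """Illustrative sketch of a cache module that can lend pages it
    manages to a peer module and recall them on demand."""

    def __init__(self, name):
        self.name = name
        self.memory = {}     # page id -> data held in this module's memory
        self.lent_out = {}   # page id -> peer module currently holding it

    def transfer_out(self, page, peer):
        # Move a page this module manages into the peer's memory.
        peer.memory[page] = self.memory.pop(page)
        self.lent_out[page] = peer

    def transfer_in(self, page):
        # Recall a previously lent-out page from the peer's memory.
        peer = self.lent_out.pop(page)
        self.memory[page] = peer.memory.pop(page)

    def read(self, page):
        # Access requests still arrive at the managing module, even for
        # pages shifted to a peer (the receiver unit of the second aspect).
        if page in self.memory:
            return self.memory[page]
        if page in self.lent_out:
            self.transfer_in(page)
            return self.memory[page]
        raise KeyError(page)
```

Keeping the `lent_out` table in the managing module preserves the single point of management: requesters never need to know which module physically holds the page.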
In the third aspect of the present invention, the cache controlling apparatus comprises a management unit, a transfer-out unit, a transfer-in unit, a judgment unit and a transfer-out destination determining unit, and controls the cache operation of a storage system which is composed of a plurality of cache modules connected to each other and a plurality of storage modules, each connected to one of the cache modules and storing information.
The management unit allows information managed by a first cache module of the plurality of cache modules to be stored in a memory of a second cache module. The transfer-out unit transfers information that is managed by the first cache module and stored in a memory of the first cache module out from the memory of the first cache module to the memory of the second cache module.
The transfer-in unit transfers information that is managed by the first cache module and stored in the memory of the second cache module back in from the memory of the second cache module to the memory of the first cache module. The judgment unit judges whether arbitrary information in the memory of the first cache module should continue to be left in the first cache module, be discarded, or be transferred out. The transfer-out destination determining unit determines to which cache module the arbitrary information is transferred out if it should be transferred out.
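One possible realization of the judgment unit and the transfer-out destination determining unit can be sketched as follows. The thresholds and the destination heuristic here are illustrative assumptions, not taken from the source: a page recent enough by the local standard is kept, a page stale even by a system-wide standard is discarded, and anything in between is transferred out to the peer module with the most recency headroom.

```python
from enum import Enum, auto

class Action(Enum):
    KEEP = auto()
    DISCARD = auto()
    TRANSFER_OUT = auto()

def judge(page_recency, local_threshold, global_threshold):
    """Hypothetical judgment rule: keep, discard, or transfer out a page
    based on its most recent access time (higher = more recent)."""
    if page_recency >= local_threshold:
        return Action.KEEP              # still hot locally
    if page_recency < global_threshold:
        return Action.DISCARD           # stale even system-wide
    return Action.TRANSFER_OUT          # worth keeping, but not here

def choose_destination(modules):
    """Hypothetical destination heuristic: pick the peer whose oldest
    resident page is oldest, i.e. the module with the widest recency
    range and thus the most room for pages of middling recency."""
    return min(modules, key=lambda m: m["oldest_access_time"])
```

Under this sketch, a heavily loaded module continuously sheds its mid-recency pages to lightly loaded peers instead of discarding them, approximating system-wide LRU behavior with fixed per-module capacities.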