1. Field of the Invention
The present invention relates to a storage apparatus that virtualizes its capacity, and more particularly to technology that can be effectively applied to a cache control method in such a storage apparatus.
2. Description of the Related Art
Generally, storage apparatuses configured with a plurality of disks use a RAID (Redundant Array of Independent Disks) configuration, in which data is distributed and maintained over the plurality of disks, to provide users with a data storage space that has high capacity and high reliability and allows high-speed access.
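The idea of distributing data over a plurality of disks with redundancy can be sketched as follows. This is an illustrative RAID-4-style example with a single XOR parity block; the function names and block layout are assumptions for illustration, not a description of any particular product.

```python
# Sketch of RAID-style striping with XOR parity: data blocks are placed
# on separate disks, and one parity block allows the contents of any one
# failed disk to be rebuilt from the surviving disks.

def stripe_with_parity(data_blocks):
    """Return the data blocks plus an XOR parity block (one per 'disk')."""
    parity = bytes(len(data_blocks[0]))
    for block in data_blocks:
        parity = bytes(a ^ b for a, b in zip(parity, block))
    return data_blocks + [parity]

def recover(blocks, lost_index):
    """Rebuild the block on a failed disk by XORing all surviving blocks."""
    rebuilt = bytes(len(blocks[0]))
    for i, block in enumerate(blocks):
        if i != lost_index:
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, block))
    return rebuilt

disks = stripe_with_parity([b"abcd", b"efgh", b"ijkl"])
assert recover(disks, 1) == b"efgh"  # disk 1 rebuilt from the others
```

Because any single missing block equals the XOR of the rest, the array tolerates one disk failure while adding only one parity disk of overhead.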
When a storage apparatus is introduced, capacity design, in which the capacity of a logical unit (LU) needed in the future is predicted in advance by a storage administrator based on the form of operation, is needed for creating the LU that serves as a storage space for user data. However, when an LU having an excessive size is allocated, problems arise such as low use efficiency of the physical capacity and an increase in TCO (total cost of ownership) due to excessive investment.
Thus, as a technique for solving the above-described problems, a technique has been proposed for providing a host computer with a virtual LU space of seemingly unlimited capacity while consuming only the minimal physical area matching the actually accessed range of the LU (for example, JP-A-2003-15915). In this technique, a physical area is not allocated at the time the LU is created; instead, a physical area is dynamically allocated from a group of disks to a part of the LU when that part is accessed.
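The on-demand allocation behavior described above can be sketched as follows. The class name `ThinLU`, the chunk size, and the free-page pool are illustrative assumptions, not the design disclosed in JP-A-2003-15915.

```python
# Sketch of dynamic physical allocation for a virtual LU: a physical
# page is taken from the disk-group pool only when the corresponding
# part (chunk) of the LU is first accessed, not when the LU is created.

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB allocation unit (assumed)

class ThinLU:
    def __init__(self, pool_pages):
        self.pool = list(pool_pages)  # free physical pages in the disk group
        self.map = {}                 # virtual chunk index -> physical page

    def write(self, lba, data):
        chunk = lba // CHUNK_SIZE
        if chunk not in self.map:
            # First access to this part of the LU: allocate physically now.
            if not self.pool:
                raise RuntimeError("physical pool exhausted")
            self.map[chunk] = self.pool.pop()
        return self.map[chunk]  # physical page backing this chunk

lu = ThinLU(pool_pages=range(100))
lu.write(lba=10 * 1024 * 1024, data=b"x")
assert len(lu.map) == 1  # only the accessed chunk consumes a physical page
```

The virtual LU can thus be presented to the host at an arbitrarily large size while physical consumption tracks only the ranges actually written.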
In general storage apparatuses, low-speed, high-capacity hard disks are frequently used. Thus, in order to speed up I/O processing, a high-speed, low-capacity cache memory (hereinafter, simply referred to as a cache) is mounted in such storage apparatuses. However, since the cache memory area is much smaller than the disk area, it is difficult to keep specific data in the cache memory continuously. Thus, a technique (hereinafter referred to as cache residence) has been proposed for achieving high-speed processing by intentionally keeping a specific program or specific data resident in the cache memory (for example, JP-A-2005-309739).
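A minimal sketch of the cache-residence idea is given below: entries flagged as resident are exempt from eviction, while other entries follow ordinary LRU replacement. The class name, flag, and LRU policy are assumptions for illustration and do not reproduce the method of JP-A-2005-309739.

```python
from collections import OrderedDict

class ResidentCache:
    """LRU cache in which 'resident' entries are never evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # key -> (value, resident flag)

    def put(self, key, value, resident=False):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = (value, resident)
        while len(self.entries) > self.capacity:
            # Evict the least recently used NON-resident entry.
            victim = next(
                (k for k, (_, r) in self.entries.items() if not r), None)
            if victim is None:
                raise RuntimeError("cache is full of resident data")
            del self.entries[victim]

    def get(self, key):
        value, _ = self.entries[key]
        self.entries.move_to_end(key)  # mark as recently used
        return value
```

For example, with a capacity of two, a resident entry survives any number of subsequent insertions, whereas non-resident entries are displaced in LRU order.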
In addition, although not described in JP-A-2005-309739, there is also a technique for keeping all the data of a specific LU resident in the cache memory.