1) Field of the Invention
The present invention relates to a technique of storing data.
2) Description of the Related Art
FIG. 16 is a block diagram of a configuration of a conventional storage device. Each of client devices 101 to 10n shown in the figure is a computer terminal that accesses a storage device 30 via a network 20, and performs processes such as reading data from the storage device 30 and writing data to the storage device 30.
The storage device 30 is a device that includes a disk 34 and a cache memory 37 as storage media, and stores large volumes of data. A network interface card 31 in the storage device 30 controls communication with the client devices 101 to 10n in accordance with a predetermined communication protocol. A network driver 32 controls the network interface card 31.
The disk 34 is a storage medium that has a higher capacity but a slower access time than the cache memory 37. A disk driver 35 controls the disk 34. A disk controlling unit 36 executes read/write control of data on the disk 34, and RAID (Redundant Array of Independent Disks) control of the disk 34. RAID control enhances reliability and processing speed by operating a plurality of disks 34 simultaneously as a single logical disk.
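The speed aspect of RAID control can be illustrated by striping (RAID 0), in which consecutive logical blocks are distributed round-robin across the member disks so that they can be accessed in parallel. The following is a minimal sketch under that assumption; the function name and the list-based disk model are hypothetical and are not part of the storage device 30:

```python
def stripe_blocks(blocks, num_disks):
    """Distribute logical blocks round-robin over num_disks member disks.

    RAID 0 striping sketch for illustration only; parity, error
    handling, and block sizing are omitted.
    """
    disks = [[] for _ in range(num_disks)]
    for i, block in enumerate(blocks):
        disks[i % num_disks].append(block)
    return disks
```

Because successive blocks land on different disks, a sequential read of the logical volume can be served by several disks concurrently, which is the speed benefit the passage refers to.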
The cache memory 37 is, for example, an SRAM (Static Random Access Memory), and has a shorter access time but a smaller capacity than the disk 34.
The cache memory 37 stores a part of the data stored on the disk 34. A cache controlling unit 38 controls access to the cache memory 37. A protocol processing unit 33 controls each unit in response to read/write requests from the client devices 101 to 10n.
When a read request is received from, for example, the client device 101, the cache controlling unit 38 determines whether the requested data is stored in the cache memory 37. If the data is present in the cache memory 37 (a cache hit), the cache controlling unit 38 reads the data from the cache memory 37 and passes it to the client device 101.
On the other hand, if there is no cache hit, the disk controlling unit 36 reads the data from the disk 34 through the disk driver 35, and passes the data to the client device 101. Thus, when the data is read from the disk 34, the access time is longer than when the data is read from the cache memory 37.
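The read sequence described above (a cache lookup, followed by a fallback to the disk on a miss) can be sketched as follows. This is a minimal illustration: the class and method names are hypothetical, and the cache memory 37 and disk 34 are modeled as dictionaries mapping block addresses to data:

```python
class DiskControllingUnit:
    """Hypothetical sketch of the disk-side read path (disk 34)."""

    def __init__(self, disk):
        self.disk = disk  # dict modeling blocks on the disk 34

    def read_from_disk(self, address):
        # Slow path: in the real device this goes through the disk driver 35.
        return self.disk[address]


class CacheControllingUnit:
    """Hypothetical sketch of the cache-lookup step (cache memory 37)."""

    def __init__(self, cache):
        self.cache = cache  # dict: block address -> data

    def read(self, address, disk_controlling_unit):
        data = self.cache.get(address)
        if data is not None:
            # Cache hit: answer directly from the fast cache memory 37.
            return data
        # Cache miss: fall back to the slower disk 34.
        return disk_controlling_unit.read_from_disk(address)
```

The branch in `read` is the cache-hit determination the passage describes; only the miss branch incurs the longer disk access time.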
When a write request is received from, for example, the client device 101, the disk controlling unit 36 writes the requested data to the disk 34 through the disk driver 35 and also passes the data to the cache controlling unit 38. The cache controlling unit 38 writes the data to the cache memory 37, so that the same data is written to both the cache memory 37 and the disk 34.
When writing the data, if there is insufficient free area in the cache memory 37, the cache controlling unit 38 prepares an area by erasing some data from the cache memory 37, and then writes the data to the prepared area.
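The write path described above, including the step of erasing data when the cache area is insufficient, can be sketched as follows. This is a minimal illustration with hypothetical names; the passage does not specify which data is erased, so a least-recently-used eviction order is assumed here purely for concreteness:

```python
from collections import OrderedDict


class WriteThroughCache:
    """Sketch of the write path: the same data goes to both the disk
    and the cache, and old data is erased when the cache area is full.

    Hypothetical illustration; LRU-order eviction is an assumption,
    not something the original description specifies.
    """

    def __init__(self, capacity, disk):
        self.capacity = capacity
        self.cache = OrderedDict()  # block address -> data, oldest first
        self.disk = disk            # dict modeling the disk 34

    def write(self, address, data):
        # Write-through: the disk 34 and the cache memory 37 both
        # receive the same data.
        self.disk[address] = data
        if address not in self.cache and len(self.cache) >= self.capacity:
            # Insufficient area: erase some data to prepare room.
            self.cache.popitem(last=False)
        self.cache[address] = data
        self.cache.move_to_end(address)
```

Note that every write reaches the disk 34 regardless of eviction; only the cached copy is subject to erasure.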
In the conventional storage device 30, a single node bears all functions, such as control of the disk 34 and the cache memory 37.
The disk 34 has the performance characteristic that its random access speed is slow. Consequently, the performance of the conventional storage device 30 degrades unless there is a cache hit in the cache memory 37. Meanwhile, the performance of the network 20 has been improving day by day; therefore, the performance of the storage device 30 has become a bottleneck for the whole system, not only in random access but also in sequential access.
For example, the performance of the disk 34 is about 50 megabytes per second (MB/s) to 80 MB/s in sequential access, but not even 10 MB/s in random access. Moreover, the average seek time of the disk 34 is about 4 milliseconds (ms).
In contrast, the network 20 can provide high performance, such as a bandwidth of 120 MB/s and a response time of 100 microseconds (μs) to 200 μs. If an accelerating technology such as a high-speed interconnect is applied, both the bandwidth and the response time can be improved by an order of magnitude. Thus, it is clear that the capability of the storage device 30 is lower than that of the network 20.
Moreover, the storage device 30 can deliver its full performance only when the data to be accessed is present in the cache memory 37, in other words, when there is a cache hit. The cache hit rate depends heavily on the memory size of the cache memory 37. Therefore, the larger the memory size of the cache memory 37, the higher the cache hit rate and the better the performance of the storage device 30.
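The dependence of performance on the cache hit rate can be made concrete with the standard expected-access-time relation T = h·T_cache + (1 − h)·T_disk, where h is the hit rate. The sketch below uses the roughly 4 ms average seek time cited above for the disk, and assumes an SRAM-order cache access time of about 1 μs; both default figures are illustrative, not measured values of the storage device 30:

```python
def average_access_time(hit_rate, t_cache_us=1.0, t_disk_us=4000.0):
    """Expected access time (in microseconds) as a function of hit rate.

    t_disk_us reflects the ~4 ms average seek time cited above;
    t_cache_us is an assumed SRAM-order access time. Both defaults
    are illustrative assumptions.
    """
    return hit_rate * t_cache_us + (1.0 - hit_rate) * t_disk_us
```

Even at a 90% hit rate the average is dominated by the miss term (0.9 × 1 μs + 0.1 × 4000 μs ≈ 401 μs), which is why a larger cache memory 37, and hence a higher hit rate, improves performance so strongly.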
However, in the conventional storage device 30, the memory size of the cache memory 37 is limited because a single node bears all functions, such as control of the disk 34 and the cache memory 37. Moreover, the memory area usable for cache control is further limited, because the conventional storage device 30 also consumes memory area in the cache memory 37 for control other than cache control.
Consequently, there is a problem that a bottleneck (a lowering of the data access speed) exists in the system, because the memory area in the cache memory 37 that is necessary for improving performance is limited.