Throughout the development of computer technology, the access speed of main memory has always lagged far behind the processing speed of the central processing unit (CPU). As a result, the CPU's high-speed processing capability cannot be fully exploited, and the working efficiency of the entire computer system suffers. A common approach to alleviating this speed mismatch between the CPU and main memory is to introduce a cache into the storage hierarchy. The cache is a level-1 memory that sits between main memory and the CPU; it has a smaller capacity than main memory but a much higher access speed, one close to the processing speed of the CPU.
In current practice, however, even with a cache disposed between main memory and the CPU, system performance can still degrade severely when a large number of I/O requests target the cache. For example, when many I/O write operations are issued to the cache, the general practice is to write all of the data into the cache regardless of how much data is being written. When a single write is large (for example, larger than 250K), the cache, having only a small capacity, fills up easily, so subsequent I/O requests must queue up, causing the performance of the whole system to decline severely.
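The failure mode above can be sketched in code. The following is a minimal illustration, not the mechanism this document goes on to claim: all names (`CachedStore`, `write`, the write-around return values) and the eviction policy are assumptions introduced here. It contrasts the "write everything into the cache" practice with a simple size-threshold write-around, where writes at or above the 250K figure mentioned in the text bypass the cache so they cannot displace many small, frequently reused blocks.

```python
LARGE_WRITE_THRESHOLD = 250 * 1024  # the "250K" figure from the text, assumed in bytes


class CachedStore:
    """Toy storage stack: a small fast cache in front of a backing store."""

    def __init__(self, cache_capacity=1024 * 1024):
        self.cache_capacity = cache_capacity
        self.cache = {}    # offset -> data held in the small, fast cache
        self.backing = {}  # offset -> data in main memory (always written through)

    def _cache_used(self):
        return sum(len(d) for d in self.cache.values())

    def write(self, offset, data):
        self.backing[offset] = data  # write-through: backing store stays current
        if len(data) >= LARGE_WRITE_THRESHOLD:
            # Write-around: a large write skips the cache entirely, so it
            # cannot fill the cache and force later I/O requests to queue.
            self.cache.pop(offset, None)  # drop any stale cached copy
            return "bypassed"
        # Small write: cache it, evicting oldest entries (FIFO) until it fits.
        while self._cache_used() + len(data) > self.cache_capacity:
            self.cache.pop(next(iter(self.cache)))
        self.cache[offset] = data
        return "cached"
```

Under the naive "cache everything" practice, the 300K write below would evict roughly the whole 512K cache; with the threshold, it goes straight to backing storage while the small write stays cached:

```python
store = CachedStore(cache_capacity=512 * 1024)
store.write(0, b"x" * 100)             # small -> "cached"
store.write(4096, b"y" * (300 * 1024)) # large -> "bypassed"
```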