1. Field of the Invention
The invention relates to a data writing method and, in particular, to a method for improving data writing efficiency of a storage system.
2. Related Art
A conventional redundant array of inexpensive disks (RAID) is schematically shown in FIG. 1A. It includes a host 11, a controller 12, and a physical disk drive array 13. The physical disk drive array 13 has several disk drives D1, D2, D3, and D4. The host 11 is coupled to the controller 12, which is in turn coupled to the disk drives D1, D2, D3, and D4. The host 11 accesses data in the disk drives D1, D2, D3, and D4 via the controller 12. The controller 12 usually stores data received from the host 11 and waiting to be written (e.g., data W1 and W2) temporarily in its cache unit 121. Afterwards, the waiting-to-be-written data are read from the cache unit 121 and written into the corresponding target disk drives (e.g., disk drives D1 and D2).
FIG. 1B is a flowchart showing the prior-art action of writing data from the cache unit to a storage medium, taking RAID level 5 as an example; please refer to FIG. 1A as well. Suppose that the target blocks of data W1 and W2 belong to the same stripe. When data W1 and W2 are to be read from the cache unit 121 of the controller 12 and written to the target disk drives D1 and D2 (step S100), the controller 12 first computes new parity data P1 associated with data W1 and W2 (step S102) and stores a parity updating log in the cache unit 121 or another non-cache unit (not shown) (step S105). The parity updating log records the stripe information of where data W1 and W2 are located, so that the parity data can be corrected for consistency after power is restored if the data were not completely written into the disk drives because of a power cut or power failure.
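The parity computation of step S102 can be illustrated with a short sketch. In RAID-5 the parity block is the bytewise XOR of the data blocks in the stripe; the block contents, stripe number, and log fields below are hypothetical, chosen only to make the example concrete.

```python
from functools import reduce

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length blocks byte by byte (RAID-5 parity)."""
    return bytes(reduce(lambda a, b: a ^ b, t) for t in zip(*blocks))

# A stripe across data drives D1..D3 with parity on D4 (hypothetical contents).
old_d1 = bytes([0x11] * 4)
old_d2 = bytes([0x22] * 4)
old_d3 = bytes([0x33] * 4)
old_parity = xor_blocks(old_d1, old_d2, old_d3)

# New data W1 and W2 destined for D1 and D2 (step S100).
w1 = bytes([0xA1] * 4)
w2 = bytes([0xB2] * 4)

# Step S102: compute the new parity P1. Here it is recomputed from the
# full stripe; a controller could equally use the read-modify-write form
# new_parity = old_parity ^ old_d1 ^ old_d2 ^ w1 ^ w2 -- both agree:
new_parity = xor_blocks(w1, w2, old_d3)
assert new_parity == xor_blocks(old_parity, old_d1, old_d2, w1, w2)

# Step S105: a parity updating log identifying the stripe being updated.
parity_update_log = {"stripe": 7, "drives": ["D1", "D2"], "parity_drive": "D4"}
```

Because XOR is its own inverse, either form yields the same P1, which is why the log only needs to identify the stripe rather than carry the data itself.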
Afterwards, the controller 12 writes data W1, W2 along with the new parity data P1 to the target disk drives D1, D2, and D4 (step S110). If data W1, W2 and the new parity data P1 are successfully written into the corresponding target disk drives D1, D2, and D4, the controller 12 deletes the previously stored parity updating log (step S115). A writing completion message is then returned to the host 11 (step S120).
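The log-protected sequence of steps S105 through S120 can be sketched as a single routine. This is a minimal illustration, not the patent's implementation: the function name, the dictionary-based stand-ins for the disk drives and the log store, and the fixed parity drive D4 are all assumptions made for the example.

```python
def write_stripe(disks, log_store, stripe_no, data_writes, parity_block):
    """Sketch of steps S105-S120: a log-protected RAID-5 stripe write.

    disks:      {drive_name: {stripe_no: block}}  (stand-in for disk drives)
    log_store:  {stripe_no: log entry}            (stand-in for the log area)
    """
    # Step S105: persist the parity updating log before touching the disks,
    # so the stripe's parity can be made consistent again after a power failure.
    log_store[stripe_no] = {"stripe": stripe_no,
                            "data_drives": sorted(data_writes)}
    # Step S110: write the new data and the new parity to the target drives.
    for drive, block in data_writes.items():
        disks[drive][stripe_no] = block
    disks["D4"][stripe_no] = parity_block  # parity drive for this stripe
    # Step S115: every write completed, so the log is no longer needed.
    del log_store[stripe_no]
    # Step S120: return a writing completion message to the host.
    return "write complete"

disks = {d: {} for d in ("D1", "D2", "D3", "D4")}
log_store = {}
msg = write_stripe(disks, log_store, 7, {"D1": b"W1", "D2": b"W2"}, b"P1")
```

Note the ordering: the log must reach stable storage before step S110 begins and may only be deleted after every write in step S110 has completed, since its whole purpose is to mark stripes whose parity may be inconsistent.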
However, in certain situations the data cannot be written to completion promptly. For example, if one of the disk drives, say D2, is performing a read request that cannot be completed for a while, the subsequent write request for data W2, sent by the controller 12 to the disk drive D2, may wait in the queue indefinitely without being performed. Even if all other write requests have completed, the controller 12 must keep waiting and cannot return a writing completion message to the host 11 until the write request for data W2 completes. Meanwhile, the related data W1, W2, P1 and the parity updating log keep occupying the limited memory space. As long as this memory space is not released, no space can be spared to receive new data. Moreover, because the stripe currently being updated cannot finish its writing process, no further write request for that stripe can be issued. In some cases a disk drive may access certain blocks at a slower speed, or may be retrying a read request so that no write request can be processed temporarily. The data being re-read could be regenerated from other related data, but the data of the write request must actually be written into the target disk drive to completion. Therefore, the controller 12 has to keep waiting until the slower disk drives complete their write requests before it can return a writing completion message to the host 11 and then delete the data stored in the cache unit 121 and the associated parity updating log.
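The bottleneck described above can be stated in one line: the host-visible completion time of a stripe write is the maximum of the per-drive write times, not their average. The timing figures below are hypothetical, used only to illustrate the gating effect of one stalled drive.

```python
# Hypothetical per-drive write completion times for one stripe, in ms.
# D2 is stalled behind a long-running read request.
drive_write_times = {"D1": 5, "D2": 250, "D4": 6}

# The controller cannot acknowledge the host, nor release W1, W2, P1 and
# the parity updating log from the cache, until every write has finished.
host_ack_time = max(drive_write_times.values())
cache_release_time = host_ack_time  # memory stays occupied the whole time
```

Even though two of the three writes finish within a few milliseconds, the cached data and the log remain pinned for the full 250 ms, which is exactly the inefficiency the invention aims to address.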
Although the above-mentioned RAID techniques can combine smaller physical disk drives into a logical medium unit of larger capacity, higher fault tolerance, and better performance for a host system to use, further enhancing the processing efficiency of the storage system remains one of the most important issues in the field.