Buffer caches are used in computer storage systems to facilitate data storage and rapid access to stored data. Such computer storage systems may include controller data fault recovery systems. The buffer caches may include data buffers and associated header buffers, also known as descriptor buffers. The size of such data buffers is generally a large power of 2, such as 32 Kbytes. In contrast, the size of such descriptor buffers is generally a small power of 2, such as 256 bytes. As a result, it would be desirable to efficiently utilize such substantially larger data buffers, for example, in systems in which access to data objects is facilitated by the data buffers.
However, in current data access architectures in which buffer caches are used, there is inefficient use of a data buffer and an associated descriptor buffer because that data buffer remains together with that descriptor buffer (i.e., in a “paired” relationship) for the life of the cache. Thus, the paired relationship may be described as being “permanent”, or “permanent pairing”, or a “permanent paired architecture”. Referring to FIGS. 1A and 1B, use of this permanent paired architecture is illustrated by a flow chart 100 (FIG. 1A) in which an operation 101 receives a data access request (or “DAR”) from any system IOM 102 (FIG. 1B). Such a data access request may be for data not previously stored (i.e., for new data), in which case the data access request may be referred to as a new data access request, or new access. Such a data access request may also be for data previously stored (i.e., for old data), in which case the data access request may be referred to as an update request. In an operation 103, a determination is made as to whether any paired data buffer 104 and descriptor buffer 105 (referred to as a pair 106) are free. A free situation of the pair 106 is indicated in FIG. 1B by the paired data buffer 104 and descriptor buffer 105 being together on one common free buffer list 107. Only if both buffers 104 and 105 of the pair 106 are free will the determination “yes” be made and the method move to an operation 108 to process the data access request. Else, an operation 109 returns to operation 103.
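The allocation flow described above (operations 101, 103, 108, and 109) may be sketched as follows. This is a minimal illustrative model, not the actual controller implementation: the class and function names, the pair count, and the buffer contents are hypothetical, chosen only to show how the permanent pairing forces a request to wait until a whole pair is on the common free buffer list.

```python
from collections import deque

# Sizes taken from the text: 32-Kbyte data buffers, 256-byte descriptor buffers.
DATA_BUF_SIZE = 32 * 1024
DESC_BUF_SIZE = 256

class Pair:
    """A permanently paired data buffer and descriptor buffer (pair 106)."""
    def __init__(self, pair_id):
        self.pair_id = pair_id
        self.data = bytearray(DATA_BUF_SIZE)        # data buffer 104
        self.descriptor = bytearray(DESC_BUF_SIZE)  # descriptor buffer 105

# The common free buffer list 107: both buffers of a pair sit on it together.
free_list = deque(Pair(i) for i in range(4))  # hypothetical cache of 4 pairs

def handle_data_access_request():
    """Operations 101-108: serve a DAR only if a whole pair is free.

    Returns the allocated pair, or None when operation 103 answers "no"
    (operation 109 would then loop back and retry).
    """
    if not free_list:          # operation 103: no complete pair is free
        return None
    return free_list.popleft() # operation 108: process using the whole pair

def release(pair):
    """Only when processing of *both* buffers of the pair is complete does
    the pair rejoin the common free buffer list."""
    free_list.append(pair)
```

Note that in this model a request cannot reuse a data buffer whose own processing has finished if its paired descriptor buffer is still busy, which is exactly the inefficiency the text describes.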
It may be understood that in the method and architecture of respective FIGS. 1A and 1B, the completion of processing of the data buffer 104 generally is intended to occur, and occurs, well before the completion of processing of the descriptor buffer 105 of the pair 106. However, in such architecture and method, neither the data buffer 104 nor the descriptor buffer 105 of the pair 106 is put on the common free buffer list 107 until processing of the (usually last-to-be-processed) descriptor buffer 105 is complete. Thus, both buffers of the pair 106 remain unavailable (not on the common free buffer list 107) for use in the buffer cache 110 to respond to the next data access request until completion of the (usually) last-to-be-completed processing of the descriptor buffer 105. The difference in the duration of such separate processing of the data buffer 104 and of the descriptor buffer 105 of the pair 106 may be as much as several seconds, for example.
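The idle time this timing gap imposes on the large data buffer can be shown with a short worked example. The completion times below are hypothetical, chosen only to match the text's statement that the descriptor buffer may finish several seconds after the data buffer.

```python
# Hypothetical completion times, in seconds, for one pair 106.
data_done = 0.05        # data buffer 104 finishes quickly
descriptor_done = 3.0   # descriptor buffer 105 may lag by "several seconds"

# Under permanent pairing, the pair is returned to the common free buffer
# list only when the slower of the two completes.
pair_freed_at = max(data_done, descriptor_done)

# Interval during which the 32-Kbyte data buffer is idle yet unavailable
# to serve the next data access request.
data_buffer_idle = pair_freed_at - data_done
```

With these numbers the data buffer sits unusable for 2.95 seconds per request; summed over many concurrent pairs, that idle capacity is what motivates the reduction in buffer cache size discussed next.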
With the permanent pairing architecture and method of respective FIGS. 1B and 1A, it may be understood that there is a need to more efficiently use the buffer cache 110. In other words, there is a need to substantially reduce the storage size (and thus the cost) of buffer caches, while still allowing the same or greater number of read/write operations in a unit of time as compared to buffer caches in the current permanent paired architecture, e.g., one that operates as shown in FIGS. 1A and 1B.