In a conventional file system, both user data objects and metadata objects are stored on persistent storage, such as an inexpensive disk drive. A fast cache constructed of volatile memory is used to temporarily store a subset of the user data objects and metadata objects. The fast cache thereby reduces the access time a processor experiences when it requests user data objects or metadata objects, compared with reading the same objects from the persistent storage.
The fast cache stores only a subset of the total set of user data objects and metadata objects. The logic controlling the fast cache may recognize the need to store a new metadata object in the fast cache, but determine that there is no room in the fast cache to store the object. When this occurs, the least recently used (LRU) object in the cached subset of data objects and metadata objects is replaced. If the LRU object has not been updated (i.e., dirtied), it is simply discarded, because a valid copy of the object still exists on the underlying persistent storage. If the object has been dirtied, it is first written to persistent storage before the new object replaces it. However, the storage processor may partition the cache in any number of ways to store objects and may use any number of cache replacement algorithms to perform object replacement within the cache.
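The eviction policy described above can be sketched as follows. This is a minimal illustrative model, not the storage processor's actual implementation: the class name `WriteBackLRUCache` and the use of a plain dictionary to stand in for persistent storage are assumptions made for the example.

```python
from collections import OrderedDict


class WriteBackLRUCache:
    """Sketch of an LRU cache that writes back dirty objects on eviction.

    `backing_store` is a hypothetical stand-in for the persistent
    storage (here, just a dict mapping object ids to objects).
    """

    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store
        self.cache = OrderedDict()  # object id -> (object, dirty flag)

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)  # mark as most recently used
            return self.cache[key][0]
        obj = self.backing[key]          # cache miss: load from storage
        self._insert(key, obj, dirty=False)
        return obj

    def write(self, key, obj):
        # An updated copy in the cache is dirty until flushed.
        self._insert(key, obj, dirty=True)

    def _insert(self, key, obj, dirty):
        if key in self.cache:
            self.cache.move_to_end(key)
        elif len(self.cache) >= self.capacity:
            # No room: evict the least recently used entry.
            lru_key, (lru_obj, lru_dirty) = self.cache.popitem(last=False)
            if lru_dirty:
                # Dirty object must be written back before it is replaced.
                self.backing[lru_key] = lru_obj
            # A clean object is simply discarded; the copy on
            # persistent storage is still valid.
        self.cache[key] = (obj, dirty)
```

Evicting a clean object here costs nothing, while evicting a dirty object incurs a write to the backing store, which is the distinction the recovery discussion below turns on.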
To recover a file system, the storage processor uses a recovery tool, such as "fsck", to scan and recover the set of metadata objects of the file system. When internal inconsistencies between individual metadata objects are found, the inconsistent metadata objects are updated. Given the large number of metadata objects in any file system, the number of metadata objects loaded, discarded, or written back to persistent storage is correspondingly large. In many cases, the same metadata object is loaded, dirtied, and then flushed repeatedly while the file system recovery tool runs.
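The I/O churn this causes can be made concrete with a small simulation. The scenario below is a hypothetical worst case assumed for illustration (every visited object is dirtied, and the scan makes multiple passes over metadata that does not fit in the cache); the function name and all numbers are invented for the example.

```python
from collections import OrderedDict


def simulate_recovery_scan(num_objects, cache_capacity, passes):
    """Count loads and dirty write-backs during an fsck-like scan.

    Models a recovery tool that visits every metadata object on each
    pass and updates (dirties) each one it visits, with a plain LRU
    cache of `cache_capacity` entries in front of persistent storage.
    Returns (loads_from_storage, flushes_to_storage).
    """
    cache = OrderedDict()  # object id -> dirty flag
    loads = flushes = 0
    for _ in range(passes):
        for obj in range(num_objects):
            if obj in cache:
                cache.move_to_end(obj)
            else:
                loads += 1                  # miss: load from storage
                if len(cache) >= cache_capacity:
                    victim, dirty = cache.popitem(last=False)
                    if dirty:
                        flushes += 1        # write dirty LRU object back
                cache[obj] = False
            cache[obj] = True               # recovery tool dirties the object
    return loads, flushes
```

With 1,000 metadata objects, a 100-entry cache, and two passes, every object is reloaded on each pass and nearly every eviction flushes a dirty object: the same metadata objects cycle through load, dirty, and flush repeatedly, exactly the pattern described above.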