The present invention relates generally to the field of computational data storage, and more particularly to data processing in serial read mode.
In enterprise storage systems, virtual capacity is often broken down into smaller logical chunks called partitions. Each partition contains not only the data written to the storage system, but also metadata that describes that data. Storage systems often choose to store that metadata in-line in the partition, thereby increasing the partition size from the data alone to “data+metadata”. A different approach is to store the metadata in a completely separate structure. This structure often resides in random-access memory (RAM) to allow fast metadata access and quick decisions regarding user inputs/outputs (I/Os) and administrative operations (e.g., volume delete).
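The separate-structure approach described above can be sketched as follows. This is a minimal illustration, not an implementation from the source: the class names, fields, and the `delete_volume` operation are hypothetical, chosen only to show metadata held in RAM apart from the data path.

```python
# Hypothetical sketch: per-partition metadata kept in a separate
# in-RAM structure, keyed by partition id, rather than stored
# in-line with the partition's data.
from dataclasses import dataclass

@dataclass
class PartitionMetadata:
    volume_id: int        # owning volume (illustrative field)
    physical_offset: int  # where the partition's data lives on media
    deleted: bool = False

class MetadataStore:
    """In-memory map of partition id -> metadata for fast lookups."""

    def __init__(self):
        self._meta = {}

    def put(self, partition_id, md):
        self._meta[partition_id] = md

    def get(self, partition_id):
        return self._meta.get(partition_id)

    def delete_volume(self, volume_id):
        # An administrative operation (e.g., volume delete) resolved
        # purely against RAM-resident metadata, without touching the
        # partitions' data.
        for md in self._meta.values():
            if md.volume_id == volume_id:
                md.deleted = True
```

Because the metadata never leaves RAM in this sketch, a decision such as whether a partition belongs to a deleted volume can be made without any I/O to the backing storage.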
Another common feature in distributed storage systems is to group every N partitions into a common storage entity (e.g., a slice), where different slices are distributed across all available compute powers in the system, but all partitions within the same slice are managed by a common compute power. This ensures that I/Os to various parts of the virtual capacity can be handled simultaneously by different threads, and therefore different cores, in the system. In order for a thread to handle I/Os for partitions of a given slice, the thread must also own the metadata of that slice. Since the other compute powers are not servicing requests for the partitions of that given slice, those compute powers do not need access to the metadata, allowing the metadata to be broken into different parts and distributed across the system. This distribution mechanism assumes that hosts will be accessing various parts of the virtual capacity at the same time, thereby keeping multiple cores handling the workload at all times.
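The grouping and distribution scheme above can be illustrated with a short sketch. The constants and function names here are assumptions chosen for clarity (the source does not fix N or the number of compute powers); a contiguous-range mapping from partition to slice and a modulo mapping from slice to owner are one plausible arrangement, not the only one.

```python
# Hypothetical sketch: group every N partitions into a slice, and
# distribute slices across the available compute powers so that all
# partitions of a slice are managed by one common owner.
PARTITIONS_PER_SLICE = 4   # "N" -- illustrative value
NUM_COMPUTE_POWERS = 3     # threads/cores in the system -- illustrative

def slice_of(partition_id):
    # Contiguous runs of N partitions share a slice, so all
    # partitions within the same slice have the same owner.
    return partition_id // PARTITIONS_PER_SLICE

def owner_of(slice_id):
    # Spread slices across all available compute powers; only the
    # owner services I/Os (and holds the metadata) for its slices.
    return slice_id % NUM_COMPUTE_POWERS
```

With these illustrative values, partitions 0 through 3 fall in slice 0 and are served by compute power 0, while partitions 4 through 7 fall in slice 1 and are served by compute power 1, so hosts touching different regions of the virtual capacity keep different cores busy concurrently.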