At least two determinants of input/output (I/O) performance for a storage system are the available data transfer rate and the latency of data access. The latter may be relevant even for large files, because it may not be possible to store a large file in a single linear arrangement. Rather, for various practical reasons, the file may be fragmented into pieces that are stored in disparate locations on a disk (or disk array). Reading or writing each fragment requires time to physically move the disk's head to the start of the new fragment; this overhead may be referred to as a “seek penalty”. The penalty is exacerbated in deduplicated storage systems, in which newly stored data blocks whose copies already exist in the system are replaced by references to the previously stored copies. As a result, the number of fragments encountered during a linear read of a stored object is determined not only by the constraints imposed by the storage subsystem, but also by the extent to which the data can be deduplicated. In many instances, improvements in deduplication effectiveness improve data compression but degrade retrieval performance. This is at least partly because, for current-technology disks, seeking to data is more of a bottleneck than transferring a sequential data block.
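The interaction described above can be made concrete with a minimal sketch. The following toy model (an illustration, not any particular system's implementation; the chunk size, hash choice, and append-only log layout are all assumptions) shows how block-level deduplication replaces repeated chunks with references to earlier copies, so that a later object's recipe points at non-contiguous physical locations, and how each break in contiguity corresponds to one seek during a linear read:

```python
"""Toy sketch of deduplicated storage and read fragmentation.

Assumptions (for illustration only): fixed-size chunks, SHA-256
content addressing, and unique chunks appended to a linear log.
"""
import hashlib

CHUNK_SIZE = 4  # bytes; unrealistically small, to keep the example visible


class DedupStore:
    def __init__(self):
        self.offsets = {}  # chunk hash -> physical offset of its first copy
        self.log = []      # append-only "physical" log of unique chunks

    def write_object(self, data: bytes) -> list:
        """Store an object; return its recipe (list of physical offsets)."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            h = hashlib.sha256(chunk).hexdigest()
            if h not in self.offsets:
                # New content: append to the log.
                self.offsets[h] = len(self.log)
                self.log.append(chunk)
            # Duplicate content: reference the previously stored copy.
            recipe.append(self.offsets[h])
        return recipe


def count_fragments(recipe: list) -> int:
    """A seek is needed whenever the next chunk is not physically contiguous."""
    fragments = 1 if recipe else 0
    for prev, cur in zip(recipe, recipe[1:]):
        if cur != prev + 1:
            fragments += 1
    return fragments


store = DedupStore()
r1 = store.write_object(b"AAAABBBBCCCCDDDD")  # stored contiguously
r2 = store.write_object(b"AAAAXXXXCCCCYYYY")  # shares two chunks with r1

print(count_fragments(r1))  # 1: one sequential run, no extra seeks
print(count_fragments(r2))  # 4: dedup scatters the linear read across the log
```

In this sketch the second object deduplicates half of its chunks, improving data reduction, but its linear read now touches four separate runs instead of one, illustrating why better deduplication can worsen retrieval performance on seek-bound media.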