Archival storage needs to protect against data loss as inexpensively as possible. One known approach is to use distributed erasure coded storage, in which a system with N sites stores N−1 sites' worth of data and one site's worth of redundancy. If a disaster destroys a site, the typical solution is to read all of the remaining sites and transfer their data to the new site, which uses that information to reconstruct the data lost when the site was destroyed. As the amount of stored data grows, so does the time required to transfer data to the new site after a disaster. One way to decrease this transfer time is to purchase or lease higher bandwidth network connections to the new site. Even so, the recovery time may still be longer than desired. Therefore, there is a need in the art for a solution which overcomes the drawbacks described above.
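The single-redundancy scheme described above can be sketched with simple XOR parity, the basic form of single-site erasure coding. The site count, block size, and variable names below are illustrative assumptions, not part of any particular system; the point is that reconstruction must read every surviving site, which is why recovery traffic grows with total data volume.

```python
# Illustrative sketch of N-site storage with one site of XOR parity.
# N, BLOCK, and the data values are assumptions chosen for the example.

N = 4      # total sites: N-1 data sites plus one redundancy site
BLOCK = 8  # bytes per block, kept tiny for illustration

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# Data held at the N-1 data sites.
data_sites = [bytes([s] * BLOCK) for s in range(1, N)]

# The redundancy site stores the XOR parity of all data sites.
parity_site = xor_blocks(data_sites)

# Disaster: one data site is lost. Reconstruction must read ALL
# surviving sites (remaining data sites plus the parity site) and
# XOR them together to rebuild the lost site's data.
lost = 1
survivors = [b for i, b in enumerate(data_sites) if i != lost]
survivors.append(parity_site)
recovered = xor_blocks(survivors)
assert recovered == data_sites[lost]
```

Note that the reconstruction step touches every surviving site in full, so the network transfer to the replacement site scales with the total amount of data stored, which is the drawback the paragraph above describes.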