Backing up data in a conventional computer system typically involves specific backup software executed by a host computer that initiates a backup operation. In such conventional systems, data to be backed up or archived is read from a Data Storage Device (DSD), such as a Hard Disk Drive (HDD) or a Solid-State Drive (SSD). The backup software executed by the host computer may prepare a backup or archive file or perform other backup management using the data retrieved from the DSD, and then store the backup or archive file back into the DSD or into a different DSD, such as an external or remote DSD.
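The conventional host-driven flow described above can be illustrated with a minimal sketch. The paths, function name, and archive format here are illustrative assumptions, not part of any particular system: the host reads data from a source DSD location, builds a compressed archive using its own CPU and memory, and stores the result on a destination DSD location.

```python
import tarfile
from pathlib import Path

def host_backup(source_dir: Path, dest_dir: Path) -> Path:
    """Hypothetical host-side backup: read data from a source DSD path,
    build an archive using host resources, and store the archive on a
    destination DSD path (which may be the same or a different DSD)."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    archive_path = dest_dir / "backup.tar.gz"
    # The host reads every file under source_dir and streams it into a
    # compressed archive, consuming host CPU cycles and memory buffers.
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(source_dir, arcname=source_dir.name)
    return archive_path
```

Note that every byte backed up passes through the host in this sketch, which is the resource cost the following paragraphs describe.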
However, the host computer may not always be available to perform a backup. In some cases, a host computer may lack spare processing or memory resources for a backup, as with host computers that run nearly continuously under relatively high workloads. Even when the host computer does have resources available, conventional backup management can degrade system performance by requiring the host computer to retrieve data from a DSD, allocate space in a local memory for managing backup operations, create a backup or archive file, and store the backup or archive file in the DSD.
In addition, DSDs may use newer technologies to store more data in a given physical storage space, as compared to previous DSDs. This increase in data density can result in more read errors when attempting to read data from the DSD. In many cases, the higher data density can increase the likelihood of defects in a storage media of the DSD, or make the storage media more vulnerable to data corruption caused by environmental conditions or by writing nearby data. Accordingly, there is a need to improve data backup so that it consumes fewer host resources and allows for better handling of read errors.