Data storage utilization is continually increasing, causing the proliferation of storage systems in data centers. In particular, the size of applications and the data generated therefrom is increasing. Moreover, systems/users back up multiple copies of a given set of data in order to maintain multiple versions. For example, snapshots of a given database stored in a server are copied and stored over time, thereby allowing a given version/snapshot of a set of data to be restored. In order to increase backup performance (i.e., reduce the amount of time it takes to complete a backup process), backup applications perform parallel streaming of the backup data from the source storage system to the target storage system.
In order to perform parallel streaming, the backup application splits the backup data into multiple data sets (commonly known as savepoints). Depending on how many save streams are available, the backup application allocates each savepoint a predetermined number of save streams. Conventionally, the number of save streams allocated to each savepoint is static. Such static allocations do not, for example, take into account differences in the sizes of the savepoints to be backed up when system parallelism resources are limited. Further, once the save streams have been allocated and the backup process for a savepoint has started, conventional backup applications do not make use of save streams that have become available because another savepoint has finished backing up. Thus, when backing up several savepoints of uneven density, parallel streaming is not fully utilized.
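The underutilization described above can be illustrated with a minimal sketch. The function and savepoint figures below are hypothetical (not drawn from any particular backup product): it models a static allocation in which each savepoint receives a fixed, even share of the available save streams, so the wall-clock time is dominated by the largest savepoint, while an ideal rebalancing scheme would keep every stream busy until all data is moved.

```python
# Hypothetical model of static save-stream allocation (illustrative only):
# each savepoint gets a fixed share of streams up front, and streams freed
# when a small savepoint finishes are never reassigned to savepoints that
# are still backing up.

def static_backup_time(savepoint_sizes, total_streams):
    """Overall backup time when streams are split evenly and never
    reallocated. Each savepoint transfers at a rate proportional to its
    fixed stream count, so the total time is set by the slowest one."""
    streams_each = max(1, total_streams // len(savepoint_sizes))
    return max(size / streams_each for size in savepoint_sizes)

def ideal_backup_time(savepoint_sizes, total_streams):
    """Lower bound if streams could be continuously rebalanced: every
    stream stays busy until all data has been transferred."""
    return sum(savepoint_sizes) / total_streams

# Savepoints of uneven density: one 90 GB savepoint and three 10 GB ones,
# with 8 save streams available (2 per savepoint under static allocation).
sizes = [90, 10, 10, 10]
print(static_backup_time(sizes, 8))  # 90 / 2 = 45.0
print(ideal_backup_time(sizes, 8))   # 120 / 8 = 15.0
```

In this model the three small savepoints finish early, leaving six streams idle while the large savepoint drains through only two, which is the inefficiency the passage attributes to conventional static allocation.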