Various protocols exist for data transfers within a storage environment. One such protocol is Serial Attached SCSI (SAS), a point-to-point serial protocol that succeeds the earlier parallel SCSI (Small Computer System Interface) bus technology. SAS is standardized to include bus speeds identified as SAS1, SAS2, and SAS3, which have per-lane speeds of 3 Gb/s, 6 Gb/s, and 12 Gb/s respectively, and thus provide approximate throughputs of 300 MB/sec, 600 MB/sec, and 1200 MB/sec respectively. Accordingly, it would take only two devices performing transfers concurrently at approximately 150 MB/sec to saturate the bandwidth of a single SAS1 lane. Although SAS provides multiple paths with multiple lanes to a storage device to increase bandwidth, system configurations often include multiple groups of storage devices daisy-chained together, and accordingly, such bandwidth may become a critical resource in such configurations.
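The bandwidth arithmetic above can be sketched as follows. This is an illustrative calculation only, not drawn from the SAS specification or any particular implementation; the function names and the rough rule of thumb that usable throughput is approximately the line rate divided by ten (reflecting 8b/10b encoding overhead) are assumptions for the example.

```python
# Approximate per-lane throughput for each SAS generation, and the number
# of devices (each transferring at a given rate) needed to saturate one
# lane. Names and structure are hypothetical, for illustration only.

SAS_LANE_GBPS = {"SAS1": 3, "SAS2": 6, "SAS3": 12}  # line rate per lane, Gb/s

def lane_throughput_mb_s(generation: str) -> int:
    """Approximate usable throughput in MB/sec.

    Roughly line rate / 10, a common rule of thumb that accounts for
    8b/10b encoding overhead (assumption, not a specified value).
    """
    return SAS_LANE_GBPS[generation] * 100

def devices_to_saturate(generation: str, device_mb_s: float) -> float:
    """How many devices at device_mb_s would saturate a single lane."""
    return lane_throughput_mb_s(generation) / device_mb_s

print(lane_throughput_mb_s("SAS1"))        # 300
print(devices_to_saturate("SAS1", 150.0))  # 2.0
```

As the example shows, two devices at 150 MB/sec suffice to saturate one SAS1 lane, which is why daisy-chained groups of devices sharing a path can quickly exhaust the available bandwidth.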
Despite bandwidth being a critical resource, traditional SAS systems often rely on static path assignments when transferring data to storage devices. In traditional configurations, these assignments are typically defined during boot-up and updated only in response to a detected failure. Such systems therefore do not contemplate using a less congested path at run-time, which invariably results in underutilization of bandwidth, as one path may remain idle while another path is saturated. Moreover, in order to minimize processing, traditional SAS systems often do not perform any scheduling of data transfers. Instead, transfer requests are simply stacked until a threshold number is reached, without regard to the size of each transfer. Accordingly, a mere 1-sector (e.g. 512-byte) request is treated in the same manner as, for example, a considerably larger half-megabyte (e.g. 500 KB) request. As a result, traditional SAS storage configurations often suffer from inefficiencies with respect to bus allocations.
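The contrast between a static, boot-time path assignment and a run-time "least congested path" choice can be sketched as follows. This is a minimal illustration of the idea, not an actual SAS driver or firmware algorithm; all class and function names, and the use of committed bandwidth as the congestion metric, are assumptions made for the example.

```python
# Hypothetical sketch: static path assignment vs. run-time selection of
# the least congested path. Purely illustrative; not from any real SAS
# implementation.

from dataclasses import dataclass

@dataclass
class Path:
    name: str
    capacity_mb_s: float
    in_flight_mb_s: float = 0.0  # bandwidth currently committed to transfers

    @property
    def free_mb_s(self) -> float:
        return self.capacity_mb_s - self.in_flight_mb_s

def pick_static(paths: list[Path]) -> Path:
    # Traditional behavior: always use the path assigned at boot-up,
    # regardless of current load (changed only on a detected failure).
    return paths[0]

def pick_least_congested(paths: list[Path]) -> Path:
    # Run-time alternative: route the transfer over the path with the
    # most free bandwidth, so one path is not saturated while another idles.
    return max(paths, key=lambda p: p.free_mb_s)

# Path A is nearly saturated; path B is idle.
paths = [Path("A", 300.0, in_flight_mb_s=290.0), Path("B", 300.0)]
print(pick_static(paths).name)           # A
print(pick_least_congested(paths).name)  # B
```

A size-aware scheduler could similarly prioritize or batch requests by transfer size rather than stacking them to a fixed count, so that a 512-byte request and a 500 KB request are not treated identically; the sketch above illustrates only the path-selection half of the problem.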