The Peripheral Component Interconnect Special Interest Group (PCI-SIG) developed a specification that adds I/O virtualization (IOV) capability to PCI Express (PCIe) devices, such as Single Root IOV (SR-IOV) Serial Attached SCSI (SAS) adapters. IOV allows an Input/Output (I/O) device, for example a storage device, to be shared by a plurality of System Images (SIs).
Modern computing and storage systems increasingly use IOV to manage IT resources through load balancing. An IOV adapter has a Base Function (BF), a Physical Function (PF), and a Virtual Function (VF). The BF manages the Multi-Root (MR) features of an MR device. The PF contains native SR-IOV functionality. The VF is a function associated with the PF that shares one or more physical resources with the PF and with other VFs associated with the same PF. These components logically connect an SI to a storage device. Native IOV requires a PCI Manager (SR-PCIM or MR-PCIM) to perform PCIe fabric discovery. An MR device also requires a PCI Manager to implement Multi-Root Aware (“MRA”) components. Each MR-PCIe root complex has its own Virtual Hierarchy (“VH”).
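The PF/VF relationship described above can be sketched in a few lines of code. This is a minimal illustrative model, not part of the PCI-SIG specification; the class names and the `num_vfs` parameter are assumptions made for the sketch.

```python
# Hypothetical sketch of the SR-IOV function hierarchy: one PF whose
# associated VFs share the PF's physical resources (e.g. the SAS ports).
# All names here are illustrative, not taken from the specification.

class VirtualFunction:
    """A lightweight function assigned to one System Image (SI)."""
    def __init__(self, pf, index):
        self.pf = pf        # a VF is associated with exactly one PF
        self.index = index  # e.g. VF0 .. VFn-1

class PhysicalFunction:
    """A full PCIe function carrying the SR-IOV capability."""
    def __init__(self, num_vfs):
        # Each VF shares physical resources with the PF and its sibling VFs.
        self.vfs = [VirtualFunction(self, i) for i in range(num_vfs)]

pf = PhysicalFunction(num_vfs=4)
assert all(vf.pf is pf for vf in pf.vfs)  # every VF points back to its PF
```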
In an SR-IOV Serial Attached SCSI (SAS) adapter, the SAS function provides persistent storage, but it does not optimize the utilization of that storage by an SI. In addition, a typical SR-IOV SAS enabled adapter does not prevent one SI from accessing the data storage of another SI.
A known method of preventing one System Image from accessing the data storage of another is Logical Unit Number (LUN) Masking and Mapping. LUNs divide physical storage into logical storage spaces and differentiate between different blocks of storage. A System Image must not access or recognize LUNs that have been assigned to other System Images. LUN Masking and Mapping prevents a server from corrupting disks or other storage belonging to other servers. For example, Windows servers attached to a Storage Area Network (SAN) will occasionally corrupt non-Windows (Unix, Linux, NetWare) storage on the SAN by attempting to write Windows storage labels to it.
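The masking rule above can be sketched as a simple lookup: each SI may access only the LUNs mapped to it. This is an assumption-laden simplification; real implementations typically key on an initiator identifier such as a World Wide Name rather than the SI labels used here.

```python
# Minimal sketch of LUN Masking and Mapping, assuming a mapping table
# keyed by System Image identifier (hypothetical keys "SI0", "SI1").

lun_map = {
    "SI0": {0, 1},   # SI0 may see LUNs 0 and 1
    "SI1": {2},      # SI1 may see only LUN 2
}

def access_allowed(si, lun):
    """Mask: an SI may access only the LUNs mapped to it."""
    return lun in lun_map.get(si, set())

assert access_allowed("SI0", 1)
assert not access_allowed("SI1", 0)  # SI1 cannot touch SI0's storage
```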
One method for connecting a client having a plurality of System Images to a storage device using LUN Masking and Mapping is illustrated in FIG. 1B. This solution requires the use of a hypervisor 130, typically implemented in firmware. The client includes a plurality of System Images (SI0 to SIn-1) 116, 117, and 118. The hypervisor 130 connects port 140 to a Virtual I/O System (VIOS) 145. The client issues I/O requests 135, 136, 137 from each SI 116, 117, 118 to the VIOS 145 through the hypervisor 130 and port 140. The VIOS forwards these I/O requests through a network connection or “fabric” 125 to the correct block storage in a storage device 120. The fabric 125 is part of the hypervisor 130 in this example. The block storage can be one or more LUNs on a SAN appliance, an internal adapter, or another construct built on the LUNs. Unfortunately, this approach increases the I/O path length, latency, and processor usage. The user is also required to maintain and manage the VIOS.
SAN storage devices 121 with built-in LUN masking capabilities may be used in combination with an IOV adapter 150, as illustrated in FIG. 1C. In this example, each SI 116, 117, 118 initiates its I/O request 135, 136, 137 through a corresponding Virtual Function (VF0 to VFn-1) 145, 146, 147 to the Physical Function (PF) 148 of the IOV adapter 150. This example improves on the system of FIG. 1B: the VIOS is eliminated, and the hypervisor is not required for the fabric 125. I/O path length and latency are similar to those of a direct connection to a storage device. Unfortunately, such storage devices are expensive. They also require the user to maintain and manage the LUN Masking and Mapping on the storage device 121.
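The FIG. 1C request path can be sketched as follows, assuming each VF is bound to the like-numbered SI and that the storage device, not the adapter, enforces the LUN map. The function name and return values are hypothetical.

```python
# Illustrative sketch of the FIG. 1C flow: an SI issues an I/O request
# through its VF to the PF, and the SAN storage device applies LUN
# masking before completing the request. Names here are assumptions.

def issue_io(vf_index, lun, san_lun_map):
    """Route a request from VFi (bound to SIi) and let the SAN check the map."""
    si = f"SI{vf_index}"                 # VFi is bound to SIi in this example
    if lun not in san_lun_map.get(si, set()):
        return "rejected"                # masking enforced on the storage device
    return "completed"

san_lun_map = {"SI0": {0}, "SI1": {1}}
assert issue_io(0, 0, san_lun_map) == "completed"
assert issue_io(0, 1, san_lun_map) == "rejected"  # SI0 cannot reach SI1's LUN
```

Note that because the map lives on the storage device 121, the user must administer it there, which is the drawback the passage identifies.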
A novel method and apparatus are needed to perform LUN Masking and Mapping within a PCIe SR-IOV enabled SAS adapter, without the need for a VIOS, a hypervisor, or a storage area network appliance.