Conventional SANs. In a conventional FC SAN 108 shown in FIG. 1, an Input/Output Controller (IOC) or Host Bus Adapter (HBA) 100 includes a Node Port (N_Port) 102 that is connected to a FC switch or Just a Bunch Of Disks (JBOD) 104 via a FC link 106. During initialization, a known FC initialization sequence initiated by a driver 116 in a host OS 110 of host 112 causes the HBA 100 to send a Fabric Login command (FLOGI) to the switch 104, including a World-Wide Port Name (WWPN) for the N_Port 102. The switch returns a FLOGI response to the N_Port 102, including a FC address (a virtual IDentifier (ID)) associated with the WWPN for the N_Port.
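The FLOGI exchange described above can be sketched in C. This is a minimal illustrative model, not real HBA driver code: the type and function names are hypothetical, and the switch is simulated by a counter handing out sequential 24-bit addresses.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the FLOGI exchange: the N_Port presents its
 * 64-bit WWPN, and the "switch" assigns a 24-bit FC address that is
 * thereafter associated with that WWPN. */

typedef struct {
    uint64_t wwpn;      /* World-Wide Port Name of the N_Port */
    uint32_t fc_addr;   /* 24-bit FC address assigned by the switch */
    int      logged_in;
} n_port;

/* Simulated fabric: the switch hands out sequential addresses. */
static uint32_t next_addr = 0x010001;

/* "Send" a FLOGI and record the address carried in the FLOGI response. */
static void flogi(n_port *p, uint64_t wwpn)
{
    p->wwpn = wwpn;
    p->fc_addr = next_addr++ & 0xFFFFFF;  /* FC addresses are 24 bits */
    p->logged_in = 1;
}
```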
The driver 116 also performs a discovery function in which it communicates with the FC switch 104 via the HBA 100 and FC link 106 and obtains a list of the addresses of all devices in the fabric. The discovery function then goes out to every address, logs into the device associated with that address (at which time a login context is allocated), and determines if the device is a FC/Small Computer System Interface (SCSI) target. If the device is a FC/SCSI target, the discovery function establishes a connection between the target and the HBA 100. In addition, the physical FC link 106 is exported as a SCSI bus 114 to the OS 110, and the remote port associated with the discovered FC/SCSI device thereafter appears as a target on the SCSI bus in typical SCSI fashion.
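The discovery loop described above — walk the fabric's address list, log into each device (allocating a login context), and keep only the FC/SCSI targets — can be sketched as follows. All names are illustrative; a real driver would issue PLOGI/PRLI here rather than consult a flag.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical device record returned by the fabric name server. */
typedef struct {
    uint32_t fc_addr;
    int      is_scsi_target;  /* would be learned via PLOGI/PRLI in practice */
} fabric_dev;

/* Log into one device; returns nonzero if it is a FC/SCSI target.
 * (A real driver would allocate a login context here.) */
static int login_and_probe(const fabric_dev *d)
{
    return d->is_scsi_target;
}

/* Visit every discovered address; returns how many devices were
 * connected to the HBA and exported as targets on the SCSI bus. */
static size_t discover(const fabric_dev *list, size_t n)
{
    size_t targets = 0;
    for (size_t i = 0; i < n; i++)
        if (login_and_probe(&list[i]))
            targets++;
    return targets;
}
```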
Conventional FC SANs 108 are limited because only one WWPN and FC address can be assigned to the N_Port 102 on a single FC link 106. In other words, this conventional computing model contemplates a single OS per system, so the OS explicitly owns the FC port. As such, system management tools have been defined (such as zoning and selective storage presentation/Logical Unit Number (LUN) masking) that are based on the FC port.
NPIV. However, FC has extended its feature set to include NPIV, a feature that allows a fabric-attached N_Port to claim multiple FC addresses. Each address appears as a unique entity on the FC fabric. Utilizing NPIV, multiple WWPNs and FC addresses recognizable by the FC switch can be assigned to a single physical FC link and N_Port. By allowing the physical FC port to appear as multiple entities to the fabric, the conventional computing model can be extended. A system can now run more than one OS by creating virtual systems (or machines) and running an OS image in each virtual machine. Instead of owning the physical FC port, an OS now uniquely owns one or more of the FC addresses (and their associated WWPNs) claimed by the FC port. Because the relationship of the virtual machine/OS owning the WWPN/FC address remains consistent with the conventional computing model, legacy FC management functions can continue to be used unchanged. Because the FC fabric treats each fabric entity as a unique port (including in all responses to name server queries, etc.), each entity largely behaves as if it were on an independent FC link.
FIG. 2 illustrates a FC SAN 222 implementing NPIV. The physical FC link 212 is established as before: an HBA 208 logs into the switch 210 by sending an FLOGI command, and the switch returns a FC address associated with the WWPN of the N_Port 220. Multiple additional initialization sequences are then initiated with the switch 210 by the N_Port 220 in the form of Fabric DISCovery requests (FDISCs), which are used to instantiate virtual FC links 216. A unique WWPN is provided with each FDISC to the switch 210, which returns a FDISC response with a unique virtual FC address to the HBA 208, forming a virtual FC link 216. Each virtual FC link 216 looks like a separate physical FC link 212, although all physical and virtual FC links actually share the same physical connection. With the creation of multiple virtual FC links, there appear to be multiple HBAs and ports connected to the switch 210.
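The FLOGI-then-FDISC sequence can be sketched as below. This is a hypothetical model: the structure names, the table size, and the simulated switch are all illustrative. Slot 0 holds the physical link created by FLOGI; each subsequent FDISC, carrying its own unique WWPN, adds a virtual link with a unique address, all sharing one physical connection.

```c
#include <assert.h>
#include <stdint.h>

#define MAX_VLINKS 8   /* illustrative limit on links per physical port */

typedef struct {
    uint64_t wwpn;
    uint32_t fc_addr;
} vlink;

typedef struct {
    vlink links[MAX_VLINKS];  /* links[0] is the physical link */
    int   nlinks;
} hba_port;

/* Simulated switch handing out unique 24-bit addresses. */
static uint32_t switch_assign_addr(void)
{
    static uint32_t next = 0x020001;
    return next++ & 0xFFFFFF;
}

/* FLOGI establishes the physical link in slot 0. */
static void flogi(hba_port *h, uint64_t wwpn)
{
    h->links[0].wwpn = wwpn;
    h->links[0].fc_addr = switch_assign_addr();
    h->nlinks = 1;
}

/* Each FDISC presents a unique WWPN and yields a unique virtual FC
 * address; returns the new link's index, or -1 when the table is full. */
static int fdisc(hba_port *h, uint64_t wwpn)
{
    if (h->nlinks >= MAX_VLINKS)
        return -1;
    h->links[h->nlinks].wwpn = wwpn;
    h->links[h->nlinks].fc_addr = switch_assign_addr();
    return h->nlinks++;
}
```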
FIG. 2 also illustrates a driver 204, which is part of the host OS 202 in a host 200. The driver 204 communicates with the host OS 202 over a SCSI bus 206 and communicates with hardware such as the HBA 208. The driver 204 performs the discovery function described above to establish a SCSI bus 206 associated with the physical FC link 212 and a virtual SCSI bus 218 associated with the virtual FC link 216.
Each instance of a FC link in the fabric, whether physical or virtual, will be generally referred to herein as a “vlink.” In addition, the physical FC link will be referred to as the “physical vlink,” and the virtual FC links will be referred to as “virtual vlinks.” Each vlink has an individual and distinct representation within the FC fabric. Each vlink has its own unique identifiers (e.g. port WWPN/World-Wide Node Name (WWNN)) and its own FC address within the fabric. Each vlink is presented its own view of storage, and thus can potentially enumerate different targets and Logical Unit Numbers (LUNs) (logical storage entities). Each vlink must therefore independently register for state change notifications, and track its login state with remote ports.
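The per-vlink state enumerated above — unique WWPN/WWNN, an individual FC address, registration for state change notifications, and independently tracked login state with remote ports — might be captured in a record like the following. All field and type names are hypothetical.

```c
#include <assert.h>
#include <stdint.h>

#define MAX_REMOTE 4   /* illustrative per-vlink remote port limit */

enum login_state { PORT_UNKNOWN, PORT_LOGGED_IN, PORT_LOGGED_OUT };

typedef struct {
    uint32_t        remote_addr;
    enum login_state state;
} remote_port;

typedef struct {
    uint64_t    wwpn, wwnn;       /* unique identifiers for this vlink */
    uint32_t    fc_addr;          /* this vlink's own fabric address */
    int         rscn_registered;  /* registered for state change notices */
    remote_port rports[MAX_REMOTE];
    int         nrports;
} vlink_ctx;

/* Record a successful login to a remote port on this vlink; returns the
 * remote port's slot, or -1 if the per-vlink table is full. */
static int vlink_track_login(vlink_ctx *v, uint32_t remote_addr)
{
    if (v->nrports >= MAX_REMOTE)
        return -1;
    v->rports[v->nrports].remote_addr = remote_addr;
    v->rports[v->nrports].state = PORT_LOGGED_IN;
    return v->nrports++;
}
```

Because each vlink owns its own table of remote ports, two vlinks can enumerate entirely different targets and LUNs, consistent with each vlink being presented its own view of storage.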
NPIV has been fully adopted. See the FC-DA Technical Report (clause 4.13 N_Port_ID Virtualization) and the FC-FS Standard (clause 12.3.2.41 Discover F_Port Service Parameters (FDISC)), the contents of which are incorporated herein by reference. Note that there is no specific mention of NPIV in the FC-FS Standard, but the FDISC description has been modified to allow Address ID assignment per NPIV. See also Fibre Channel Link Services (FC-LS-2), Rev 1.2, Jun. 7, 2005, T11.org/INCITS, which describes the standard for FC link services and provides definitions for tools used for NPIV and describes N_Port requirements for virtual fabric support, and Fibre Channel Direct Attach-2 (FC-DA-2), Rev 1.00, Nov. 18, 2004, T11.org/INCITS, which describes the Standard for FC direct connect link initialization, including use of the NPIV feature, both of which are incorporated by reference herein.
Although NPIV allows the creation of many virtual vlinks along with the actual physical vlink, there is in fact only one HBA that must be shared between the physical and virtual vlinks. The resources of the HBA are finite, and different HBAs may have different levels of resources. The limitation of only one HBA places resource constraints on the system, such as the number of vlinks that may be present at any time. Several key HBA resources will now be discussed.
RPI resource. For every device that is seen on each vlink, an independent resource called a Remote Port Index (RPI) is consumed within the HBA. FIG. 3 is an illustration of the consumption of resources in an NPIV FC SAN 310. In FIG. 3, if one FC device D0 is installed on the physical vlink 312 and two FC devices D1 and D2 are installed on the virtual vlink 314, a total of three RPIs are consumed in the HBA 300 (see RPIs 302, 304 and 306). RPIs are context cache memory data structures, each of which in FC may also be referred to as a login context between two ports. RPIs maintain communication state information that has been negotiated between the local FC port on the HBA and the remote FC port on the installed FC device. The data maintained in an RPI includes the addresses of both the local and remote FC ports, the class of service parameters to be used between them, and the maximum frame size. Because the table of RPIs is maintained in finite HBA memory, only a limited number of RPIs are available.
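A sketch of an RPI entry and its finite table, using the fields listed above, might look as follows. The field names, class-of-service encoding, and table size are illustrative (the table is made deliberately tiny to demonstrate exhaustion).

```c
#include <assert.h>
#include <stdint.h>

#define RPI_TABLE_SIZE 3   /* deliberately small to show exhaustion */

/* Hypothetical login context negotiated between a local and remote port. */
typedef struct {
    uint32_t local_addr;       /* FC address of the local HBA port */
    uint32_t remote_addr;      /* FC address of the remote device port */
    uint8_t  class_of_service; /* negotiated class of service */
    uint16_t max_frame_size;   /* negotiated maximum frame size */
    int      in_use;
} rpi;

static rpi rpi_table[RPI_TABLE_SIZE];

/* Allocate an RPI for a new login; returns its index, or -1 when the
 * finite table in HBA memory is exhausted. */
static int rpi_alloc(uint32_t local, uint32_t remote, uint16_t max_frame)
{
    for (int i = 0; i < RPI_TABLE_SIZE; i++) {
        if (!rpi_table[i].in_use) {
            rpi_table[i] = (rpi){ local, remote, 3, max_frame, 1 };
            return i;
        }
    }
    return -1;   /* no RPIs left */
}
```

With a three-entry table, the FIG. 3 scenario (D0, D1, D2) consumes every RPI; logging into a fourth device would fail until one is released.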
XRI resource. For every SCSI I/O context that is actively occurring between the local and remote FC ports for a particular login context or RPI, another independent resource called an eXchange Resource Index (XRI) is consumed within the HBA 300. XRIs are typically bound to a particular I/O context and to an associated RPI, and store exchange IDs or exchange resources such as sequence counters, data offsets, relationships for Direct Memory Access (DMA) maps, and the like. For example, if host 316 sends a read request to FC device D1 through port P1, an XRI 320 is consumed and bound to RPI 304. As with RPIs, the XRIs are data structures stored in the finite HBA memory, and are limited in number.
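An XRI entry bound to an RPI for one active I/O, per the description above, can be sketched similarly. Names and sizes are again hypothetical; the point is that each active exchange ties up one slot in a finite table until the I/O completes.

```c
#include <assert.h>
#include <stdint.h>

#define XRI_TABLE_SIZE 4   /* illustrative finite exchange table */

/* Hypothetical exchange context for one in-flight I/O. */
typedef struct {
    int      rpi_index;    /* login context this exchange is bound to */
    uint16_t exchange_id;
    uint32_t seq_count;    /* sequence counter for the exchange */
    uint64_t data_offset;  /* current offset into the I/O buffer */
    int      in_use;
} xri;

static xri xri_table[XRI_TABLE_SIZE];

/* Bind a free XRI to the given RPI for a new I/O; -1 when exhausted. */
static int xri_alloc(int rpi_index, uint16_t exchange_id)
{
    for (int i = 0; i < XRI_TABLE_SIZE; i++) {
        if (!xri_table[i].in_use) {
            xri_table[i] = (xri){ rpi_index, exchange_id, 0, 0, 1 };
            return i;
        }
    }
    return -1;
}

/* Release the XRI back to the free pool when the I/O completes. */
static void xri_free(int i)
{
    xri_table[i].in_use = 0;
}
```

In the read-request example, `xri_alloc` would be called with the index of RPI 304 when host 316 issues the read to D1, and `xri_free` would run on completion, returning the slot for re-use.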
To manage these resources properly, there is a need to be able to monitor the finite resources of the HBA, create and delete vlinks, and remove targets and release certain resources known to be unnecessary back into a free pool of resources so that they can be re-used by other entities. In addition, there is a need for a firmware implementation that can manage the vlinks and the finite resources of the HBA.