The disclosure relates generally to computer systems that implement synchronous input/output (I/O) commands, and more specifically, to computer systems that maintain a CRC context in an I/O endpoint device cache.
In general, the technical field discussed herein includes communications between servers and storage control units over a storage area network involving multiple switches and multiple layers of a protocol stack. Contemporary implementations of these communications include asynchronous access operations by the operating systems within the storage area network. Asynchronous access operations require queues and schedulers for initiating requests, along with interrupts and the associated context switches for processing completion status. These queues, schedulers, and interrupts amount to asynchronous overhead that adds significant latency and processing delay across the storage area network.
Storage Area Networks (SANs), as described by the Storage Networking Industry Association (SNIA), are high-performance networks that enable storage devices and computer systems to communicate with each other. In large enterprises, multiple computer systems or servers have access to multiple storage control units within the SAN. Typical connections between the servers and control units use technologies such as Ethernet or Fibre Channel, with the associated switches, I/O adapters, device drivers, and multiple layers of a protocol stack. Fibre Channel, for example, as defined by the INCITS T11 Committee, defines the physical and link layers FC-0, FC-1, and FC-2, as well as FC-4 transport layers such as the Fibre Channel Protocol (FCP) for SCSI and FC-SB-3 for Fibre Connectivity (FICON). There are many examples of synchronous and asynchronous I/O access methods, each with their own advantages and disadvantages. Synchronous I/O causes a software thread to be blocked while waiting for the I/O to complete, but avoids context switches and interrupts.
For synchronous I/O, the host bridge needs to maintain a cyclic redundancy check (CRC) context for each entry in its device table cache that is working on a synchronous I/O CRC operation. For each CRC context, the corresponding device table entry (DTE) is maintained ("pinned") in the cache. This context is owned by the host bridge hardware and must not be lost when cache entries are evicted. For regular direct memory accesses (DMAs), eviction is technically simple: the cache entry is deleted. When the host bridge triggers an update to the error status, the update is sent with an atomic operation into the device table in memory. A CRC update upon eviction would increase hardware complexity by requiring an additional data path into the mainline DMA write path, and would increase system load and system latency by adding further atomic operations.