As central processing units (CPUs) continue to get faster, the memory units that supply data to the CPUs must continually get faster as well. In a typical computer system, a variety of different memory devices are employed to meet the needs of a particular application, wherein each memory device provides a trade-off in storage capacity, cost, and response time. System performance is maximized by arranging the devices in a hierarchy, combining extremely fast but low-capacity memory devices with slower, higher-capacity memory devices. The memory hierarchy typically includes both on-chip memory devices (e.g., processor registers, caches, etc.) and off-chip memory devices (e.g., main memory devices and disk storage). For example, a computer system may employ a hard disk drive (HDD) as the disk storage device and a dynamic random access memory (DRAM) as the main memory. The hard disk drive provides cheaper storage (i.e., lower cost per gigabyte) and higher capacity, but slower response time. In contrast, the DRAM device provides faster response time, but at higher cost and lower capacity.
In recent years, non-volatile memory (NVM) devices in the form of solid-state drives have been employed as a complementary type of disk storage, used either instead of or in conjunction with an HDD. The NVM devices provide faster response time than a typical HDD, but at a higher cost per gigabyte (GB). Both are located “off-board”, and therefore communicate with the CPU or host system via a data bus. As such, HDD and NVM devices are often referred to as an “Input/Output (I/O) Memory Tier”, because they require input/output operations to communicate with the CPU (referred to herein as the host system).
The host system communicates with the NVM device via the data bus according to an interface protocol. For example, peripheral component interconnect express (PCIe) data buses have gained popularity in recent years. Interface protocols, such as non-volatile memory express (NVMe) and SCSI over PCIe (SOP), have been created to provide a common communication interface for these devices.
The SOP interface standard being developed provides for the creation in the host system of an inbound command queue and an outbound command queue. For example, if the host system would like to write data to the NVM device, a write command is placed in the inbound command queue, where it is retrieved by the NVM device. In addition, the host system creates a data buffer where application data to be written is stored, as well as a protection or metadata buffer that stores information to be appended to the application data. For some NVM devices and/or modes, application data and protection data are interleaved when received by the device. For these devices, the host system may create a third buffer in which the data is interleaved before being retrieved by the NVM device in response to the write command. However, using a third buffer to store data already stored in the first and second buffers is duplicative and therefore not cost effective.
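The interleaving performed in the third buffer can be sketched as follows. This is a minimal illustration in Python, not part of any standard: the block sizes (512-byte application data blocks with 8-byte protection fields) are assumptions chosen only to make the example concrete, and real devices may use other sizes and binary layouts.

```python
# Illustrative sketch: host-side interleaving of application data and
# protection (metadata) blocks into a single third buffer.
# Block sizes below are assumptions for illustration only.

DATA_BLOCK_SIZE = 512   # assumed size of one application data block
PROT_BLOCK_SIZE = 8     # assumed size of the protection field per block

def interleave(data_buf: bytes, prot_buf: bytes) -> bytes:
    """Append each block's protection field directly after its data block,
    producing the interleaved layout the device expects to retrieve."""
    assert len(data_buf) % DATA_BLOCK_SIZE == 0, "partial data block"
    n_blocks = len(data_buf) // DATA_BLOCK_SIZE
    assert len(prot_buf) == n_blocks * PROT_BLOCK_SIZE, "size mismatch"
    out = bytearray()
    for i in range(n_blocks):
        out += data_buf[i * DATA_BLOCK_SIZE:(i + 1) * DATA_BLOCK_SIZE]
        out += prot_buf[i * PROT_BLOCK_SIZE:(i + 1) * PROT_BLOCK_SIZE]
    return bytes(out)
```

The sketch makes the cost concrete: every byte already held in the data buffer and the protection buffer is copied again into the third buffer, which is the duplication noted above.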
In other embodiments, to avoid the cost associated with a third buffer, scatter/gather list (SGL) descriptors, which are created by the host system and utilized by the NVM device to determine the locations of the application data and protection data to be retrieved, provide the desired interleaving of data from different buffers. This approach requires that a separate pair of SGL descriptors be created for every block of application data and its corresponding block of protection data. For a message that includes many data blocks, the overhead to create and store the required SGL descriptors becomes prohibitive.
It would therefore be desirable to provide a more efficient manner of interleaving data within the framework of the communication interface standards being developed.