Non-volatile memory storage devices, such as solid state drives (SSDs), use solid state memory to store data. In recent years, these devices have presented an alternative to conventional hard disk drives (HDDs), which have slower access times. In addition, non-volatile memory storage devices offer substantially lower power consumption and failure rates than HDDs, making them particularly useful for implementing modern enterprise storage solutions.
Non-volatile memory storage devices are not without their drawbacks, however. In comparison to volatile memories, such as SRAM or DRAM, memory operations, particularly write operations, may impose significant latency. As a result, a queue may be required to store write commands and/or data during operation until the non-volatile memory can execute each command serially. One known implementation for this queue involves using a volatile memory buffer cache, such that write data are written first to the volatile memory buffer cache and subsequently to the solid state memory of the device when the solid state memory is available. In some instances, a limited volatile memory cache may exist within the non-volatile memory chip and may be used for this purpose.
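The queuing scheme described above can be sketched in simplified form. The following Python model is illustrative only; the class and method names (`BufferedDevice`, `write`, `drain`) are hypothetical and do not correspond to any particular device interface. Writes are acknowledged quickly once queued in a volatile buffer, and a separate drain step executes the queued commands serially against the (slower) solid state memory:

```python
from collections import deque

class BufferedDevice:
    """Hypothetical model of a write buffer cache in front of solid
    state memory. All names here are illustrative, not a real API."""

    def __init__(self):
        # Volatile buffer cache: fast, but contents are lost on power failure.
        self.volatile_buffer = deque()
        # Dict standing in for the device's non-volatile solid state memory.
        self.solid_state = {}

    def write(self, address, data):
        # Fast path: the command is queued in volatile memory and
        # acknowledged without waiting for the slow non-volatile write.
        self.volatile_buffer.append((address, data))

    def drain(self):
        # Background path: execute each queued command serially once
        # the solid state memory is available.
        while self.volatile_buffer:
            address, data = self.volatile_buffer.popleft()
            self.solid_state[address] = data

dev = BufferedDevice()
dev.write(0x10, b"hello")
dev.write(0x20, b"world")
dev.drain()
```

After `drain()` completes, both writes reside in the simulated solid state memory; before it runs, they exist only in the volatile buffer, which is the window of vulnerability discussed next.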
With this approach, data intended to be stored in a solid state device may be lost if the device loses power while the data is still held in the volatile memory. To prevent this, typical implementations have used capacitors, and in particular supercapacitors, to provide power in the event of a power failure in the device. Power may be provided, for instance, until data has been written from the buffer cache to non-volatile memory. Using capacitors in this manner is, however, an expensive and often complex solution.
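The failure mode and the hold-up remedy can be illustrated with a minimal, self-contained sketch. The function name `power_failure` and the flag `holdup_available` are hypothetical labels for this illustration, not part of any real device interface:

```python
from collections import deque

def power_failure(volatile_buffer, solid_state, holdup_available):
    """Hypothetical model of a power-loss event.

    With hold-up energy (e.g., from a supercapacitor), queued writes
    drain to non-volatile memory before power is fully lost. Without
    it, the volatile buffer contents simply vanish.
    """
    if holdup_available:
        # Supercapacitor keeps the device alive long enough to flush.
        while volatile_buffer:
            address, data = volatile_buffer.popleft()
            solid_state[address] = data
    # Either way, volatile memory contents are gone once power is lost.
    volatile_buffer.clear()
    return solid_state

# One write is queued but not yet flushed when power fails.
nvm_without_holdup = power_failure(deque([(0x10, b"hello")]), {}, False)
nvm_with_holdup = power_failure(deque([(0x10, b"hello")]), {}, True)
```

In the first case the queued write never reaches non-volatile memory; in the second, the hold-up energy allows the flush to complete, which is precisely the behavior the capacitor-based designs above pay for.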