Flash Management Operations
Non-volatile memory is a type of memory that retains its stored data without a power source. There are several types of non-volatile memories, with different read, write and erase capabilities, access times, data retention, data endurance cycles, etc. Electrically Erasable Programmable Read Only Memory (EEPROM) is capable of performing read and write operations at the per-byte level, which means that each memory location can be individually read and written.
Flash memory, composed of flash-type floating-gate transistors, or cells, is a non-volatile memory similar in functionality and performance to EEPROM memory; flash memory has the advantage of being relatively inexpensive, although it operates under certain limitations. It is not possible to rewrite a previously written location on flash memory without first erasing an entire memory section, i.e., the flash cells must be erased (e.g. programmed to “one”) before they can be programmed again. Flash memory can only erase relatively large groups of cells, usually called erase blocks (for example, 16 KB to 256 KB in size for many current commercial devices). Therefore, updating the contents of a single byte or even a chunk of 1 KB requires “housekeeping” operations—data stored in the erase block that is not updated must first be moved elsewhere (i.e. relocated) so that it will be preserved during the erase operation, and then moved back into place after the update.
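The erase-before-write constraint and the resulting housekeeping sequence can be sketched as follows. This is an illustrative model only—the block size, class names and three-step sequence are assumptions for clarity, not an implementation taken from any of the referenced patents.

```python
ERASE_BLOCK_SIZE = 128 * 1024   # hypothetical 128 KB erase block
CHUNK_SIZE = 1024               # the 1 KB chunk being updated

class FlashBlock:
    """Models one erase block: cells erase to all-ones ("one") and can
    only be programmed again after the whole block has been erased."""
    def __init__(self):
        self.data = bytearray(b'\xff' * ERASE_BLOCK_SIZE)
        self.erased = True

    def erase(self):
        self.data = bytearray(b'\xff' * ERASE_BLOCK_SIZE)
        self.erased = True

    def program(self, offset, payload):
        if not self.erased:
            raise RuntimeError("cannot rewrite without erasing first")
        self.data[offset:offset + len(payload)] = payload
        self.erased = False

def update_chunk(block, offset, chunk):
    """Housekeeping sequence for updating one chunk inside an erase block."""
    scratch = bytes(block.data)              # 1. relocate: preserve untouched data
    block.erase()                            # 2. erase the whole block
    merged = bytearray(scratch)
    merged[offset:offset + len(chunk)] = chunk
    block.program(0, bytes(merged))          # 3. program merged contents back
```

Note that updating 1 KB costs a full 128 KB erase and reprogram; real flash management systems amortize this cost, but the relocate/erase/program cycle above is the underlying constraint they work around.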
Sometimes, some of the blocks of the flash memory device, called “bad blocks”, are not reliable and their use should be avoided. Blocks are declared as “bad blocks” either by the manufacturer, when initially testing the device, or by application software, when detecting the failure of the blocks during operation of the memory device.
To overcome these limitations, a Flash File System (FFS) is implemented, as disclosed in U.S. Pat. No. 5,404,485 to Ban, incorporated by reference as if fully set forth herein. FFS provides a system of data storage and manipulation on flash devices that allows these devices to emulate magnetic disks. In the existing art, applications or operating systems interact with the flash storage subsystem using virtual addresses, rather than using physical addresses. There is an intermediary layer between the software application and the physical device that provides a mapping (also referred to herein as a “translation”) from the virtual addresses into the physical addresses. While the application or operating system software may view the storage system as having a contiguous defect-free medium that can be read or written randomly with no limitations, the physical addressing scheme has “holes” in its address range (due to bad blocks, for example), and pieces of data that are adjacent to each other in the virtual address range might be greatly separated in the physical address range. The intermediary layer that does the mapping described above may, for example, be implemented in part or in whole by a software driver running on the same CPU on which the applications run. Alternatively or additionally, the intermediary layer may be in part or in whole embedded within a controller that controls the flash device and serves as the interface for the main CPU of the host computer when the host computer accesses the memory device (also called “storage device”). This is, for example, the situation in removable memory cards such as SecureDigital (SD) cards or MultiMediaCards (MMCs), where the card has an on-board controller running a firmware program that, among other functions, implements the type of mapping described above.
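The virtual-to-physical translation described above can be illustrated by a minimal sketch. The table-building policy shown here—contiguous virtual blocks mapped in order onto whatever physical blocks are usable—is an assumption chosen for illustration, not the method of the cited patents.

```python
TOTAL_PHYSICAL_BLOCKS = 8
BAD_BLOCKS = {2, 5}   # hypothetical "holes" in the physical address range

# The intermediary layer builds a translation table so that the client
# sees a contiguous, defect-free virtual range even though the usable
# physical blocks are not contiguous.
usable = [p for p in range(TOTAL_PHYSICAL_BLOCKS) if p not in BAD_BLOCKS]
virtual_to_physical = {v: p for v, p in enumerate(usable)}

def translate(virtual_block):
    """Map a virtual block number to its physical block number."""
    return virtual_to_physical[virtual_block]
```

With the hypothetical bad blocks 2 and 5 skipped, virtual blocks 0–5 map to physical blocks 0, 1, 3, 4, 6 and 7: adjacent virtual blocks need not be adjacent physically.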
Software or firmware implementations that perform such address mappings are usually called “flash management systems” or “flash file systems”. The term “flash file system” actually is a misnomer, as the implementations do not necessarily support “files” in the sense that files are used in operating systems or personal computers, but rather support block device interfaces similar to those exported by hard disk software drivers. Still, the term is commonly used, and “flash file system” and “flash management system” are used herein interchangeably.
Other systems that implement virtual-to-physical address mapping are described in U.S. Pat. No. 5,937,425 to Ban and U.S. Pat. No. 6,591,330 to Lasser. Both of these patents are incorporated by reference for all purposes as if fully set forth herein.
For the present disclosure, a “flash management layer” is a computer element (i.e. implemented in hardware, software, firmware or any combination thereof—residing on any number of devices) that (i) receives requests to write data to flash or to read data from flash (for example, at a specified block); (ii) handles the requests by programming the flash or reading data from flash; and (iii) effects one or more auxiliary flash management operations (for example, housekeeping operations, address mapping, bad block management, wear-leveling, management of storage of “mapping tables” or other flash management tables, and error correction). Flash management layers are useful for “hiding” from a client at least some of the complexity of using flash memory—for example, the need to carry out housekeeping operations when writing to flash, the need to effect bad block management, the fact that data read from flash cells is sometimes unreliable and there is a need to effect some sort of error correction, the need to extend the life of the flash memory cells by wear leveling, and, in the case of NAND, the need to operate under the constraint that NAND is not random-access but serial-access.
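One of the auxiliary operations named above, wear leveling, can be sketched in a few lines. The per-block erase counters and the least-worn-block allocation policy are illustrative assumptions; actual flash management systems (e.g. per U.S. Pat. No. 6,850,443, cited below) use more elaborate schemes and persist such counters in flash management tables.

```python
# Hypothetical per-block erase counts and free list; a real flash
# management layer would maintain these in its management tables.
erase_counts = {0: 12, 1: 3, 2: 9, 3: 3}
free_blocks = {0, 2, 3}

def pick_block_for_write():
    """Wear leveling: allocate the least-worn free physical block so
    that erase cycles are spread evenly across the device."""
    return min(free_blocks, key=lambda b: erase_counts[b])
```

Because every erase wears a block, always reusing the same blocks would exhaust their endurance early; spreading erases across the least-worn blocks extends the usable life of the whole device.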
Examples of policies for storage of flash management tables are discussed in US 20060253645 incorporated by reference for all purposes as if fully set forth herein.
Examples of wear leveling techniques for flash EEPROM systems are discussed in U.S. Pat. No. 6,850,443 incorporated by reference for all purposes as if fully set forth herein.
Examples of techniques for managing programming voltage parameters for writing to flash memory, or “programming flash memory cells,” are provided by U.S. Pat. No. 6,903,972 and US 2005/0024978, incorporated by reference for all purposes as if fully set forth herein.
Some Exemplary Flash Storage Systems
FIG. 1A is a block diagram of an exemplary flash storage system 100A that includes both a file-system 130 layer and a flash management layer 140 for directly reading from and/or writing to flash memory cells 150. “Client application” 106A (for example, an application executing on a personal computer (e.g. MS-Word®) or a cell-phone application (e.g. an address book application)) sends to the file-system 130 commands to store data to persistent memory and to read data from persistent memory according to a standard file-system syntax (“file open,” “file read,” “file write,” etc.). These file system commands are received by file system 130 via “file system interface” 30 (i.e. a logical interface through which file system 130 receives commands according to file system syntax).
Upon receiving each aforementioned data storage request or data retrieval request via file system interface 30, file system 130 invokes data storage or data retrieval services provided by flash management layer 140 which “hides the complexity” of flash storage by presenting to the file system a block-oriented or page-oriented interface (for example, a Logical Block Addressing (LBA) interface) or other high-level interface such as interface 40. Thus, in the exemplary configuration shown in FIG. 1A, there is no need for file system 130, or for client application 106A, to be “aware” of the specifics of flash management.
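The layering of FIG. 1A can be sketched as follows. The class and method names, sector size and dictionary-backed store are hypothetical stand-ins; the point is only that the file system addresses logical sectors through an LBA-style interface and never deals with erase blocks or bad blocks.

```python
class FlashManagementLayer:
    """Presents a block-oriented (LBA-style) interface such as
    interface 40; internally it would hide address mapping,
    housekeeping and bad-block handling (modeled here by a dict)."""
    SECTOR_SIZE = 512

    def __init__(self):
        self._sectors = {}

    def write_sector(self, lba, data):
        assert len(data) == self.SECTOR_SIZE
        self._sectors[lba] = bytes(data)

    def read_sector(self, lba):
        # unwritten sectors read as erased (all-ones) flash
        return self._sectors.get(lba, b'\xff' * self.SECTOR_SIZE)

class FileSystem:
    """Receives file-level commands (interface 30) and maps them onto
    logical sectors; it needs no awareness of flash specifics."""
    def __init__(self, fml):
        self.fml = fml

    def file_write(self, start_lba, payload):
        size = self.fml.SECTOR_SIZE
        for i in range(0, len(payload), size):
            sector = payload[i:i + size].ljust(size, b'\x00')
            self.fml.write_sector(start_lba + i // size, sector)
```

The file system pads and splits a write into fixed-size sectors; everything below the `write_sector` call is the flash management layer's concern.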
In recent years, so-called “object-oriented” storage systems have become more popular. FIGS. 1B and 1C are block diagrams of exemplary object-oriented systems (shown at 100B and 100C, respectively) in which data is stored to, and/or retrieved from, flash. In the example of FIG. 1B, the client application 106B issues a request to store a data object or retrieve a data object using an object-oriented protocol such as Media Transfer Protocol (MTP) or Object-based Storage Device (OSD) protocol/command via object-oriented storage interface 20. Object mode-logical sector mode translation layer 124 (i.e. part of an ‘object-oriented storage system’) receives the request, and sends corresponding read or write requests to flash management layer 140 through its block-oriented interface (for example, an LBA interface), i.e. flash management layer interface 40.
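The role of translation layer 124 in FIG. 1B—decomposing object-level store/retrieve requests into sector operations on a block-oriented interface—can be sketched as below. The one-object-per-contiguous-run allocation scheme and all names are purely illustrative assumptions, not details of MTP, OSD or the figure.

```python
class BlockDevice:
    """Stand-in for the flash management layer's LBA interface."""
    SECTOR_SIZE = 512

    def __init__(self):
        self._sectors = {}

    def write_sector(self, lba, data):
        self._sectors[lba] = bytes(data)

    def read_sector(self, lba):
        return self._sectors.get(lba, b'\xff' * self.SECTOR_SIZE)

class ObjectToSectorTranslationLayer:
    """Receives object store/retrieve requests (e.g. over an MTP- or
    OSD-style interface) and issues sector writes/reads on the
    block-oriented interface beneath it."""
    def __init__(self, dev):
        self.dev = dev
        self.directory = {}       # object name -> (start_lba, byte length)
        self.next_free_lba = 0

    def store_object(self, name, payload):
        start, size = self.next_free_lba, self.dev.SECTOR_SIZE
        for i in range(0, len(payload), size):
            sector = payload[i:i + size].ljust(size, b'\x00')
            self.dev.write_sector(start + i // size, sector)
        self.directory[name] = (start, len(payload))
        self.next_free_lba += (len(payload) + size - 1) // size

    def retrieve_object(self, name):
        start, length = self.directory[name]
        size = self.dev.SECTOR_SIZE
        out = b''.join(self.dev.read_sector(s)
                       for s in range(start, start + (length + size - 1) // size))
        return out[:length]
```

The client names objects; only the translation layer knows which logical sectors hold them, mirroring how layer 124 isolates the object-oriented interface 20 from the block-oriented interface 40.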
The example of FIG. 1C relates to a storage system that supports more “modern” object oriented storage protocols together with “legacy” file system architecture. Thus, in the example of FIG. 1C, the client application 106B issues a request to store a data object or retrieve a data object using an object-oriented protocol such as MTP or OSD via object-oriented storage interface 20. Object mode-file mode translation layer 126 receives the request, and sends, via file system interface 30, corresponding file-system read or write requests to the “legacy” file system 130.