1. Field of the Invention
The invention relates to memory systems and, more particularly, to managing concurrent access to memory systems.
2. Description of the Related Art
In order to assure the coherency of data stored in a mass storage device, as well as to avoid conflicts, current memory controllers are only allowed to perform one memory access operation at a time on a given memory device. In particular, different types of memory devices attempt to avoid conflicts in different ways. For example, with a NAND device, the host device is the only agent with the ability to stop a currently executing transaction and to initiate the execution of another transaction. With SD/MMC devices, however, the host device sends a request and software internal to the SD/MMC device handles the request, deciding whether the currently executing process is to be halted while the requested process is executed. In this situation, the internal software is afforded a relatively long timeout in order to perform other functions (such as maintenance operations) as well as to arbitrate process execution. Unfortunately, these long timeouts do not provide the fast responses required for some use cases, such as demand paging. Demand paging refers to the technique whereby, in order to conserve memory resources, pages of data are copied from data storage to RAM only as they are needed by the processor executing an application.
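The demand-paging behavior described above can be sketched as follows. This is a minimal Python illustration only; the class name, page size, and backing-store representation are hypothetical and not drawn from any particular operating system or device.

```python
# Minimal sketch of demand paging: pages are copied from backing
# storage into RAM only when first accessed (a "page fault").
# All names here are illustrative.

PAGE_SIZE = 4096

class DemandPager:
    def __init__(self, storage: bytes):
        self.storage = storage          # backing store (e.g., NAND flash)
        self.ram = {}                   # page number -> page bytes

    def read(self, addr: int) -> int:
        page_no, offset = divmod(addr, PAGE_SIZE)
        if page_no not in self.ram:     # page fault: load on demand
            start = page_no * PAGE_SIZE
            self.ram[page_no] = self.storage[start:start + PAGE_SIZE]
        return self.ram[page_no][offset]

pager = DemandPager(bytes(range(256)) * 64)   # 16 KiB backing store
pager.read(5000)                              # faults in page 1 only
assert sorted(pager.ram) == [1]               # pages 0, 2, 3 never loaded
```

The point of the sketch is that only the faulting page is fetched; the cost of that fetch is exactly the read latency that the remainder of this section is concerned with.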
In some situations, it may be very disadvantageous to completely preclude any other access to the memory device while a READ or WRITE operation is ongoing. One example of a situation where read latency is critical is demand paging, since demand paging requires that read and write processes have continuous access to data stored in memory. Demand paging is commonly used in NAND flash memory systems; however, since read and write operations cannot be performed simultaneously, each page retrieval operation blocks the whole system until the page is fully loaded, thereby greatly slowing down application execution.
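The blocking effect just described can be modeled with a single lock standing in for the device's one-operation-at-a-time rule. The function names and timings below are purely illustrative assumptions, not part of any device protocol.

```python
import threading
import time

# Sketch: one lock models a device that permits one operation at a time.
# An urgent demand-page READ arriving mid-WRITE must wait for the WRITE
# to complete, inheriting its full remaining latency.

device_lock = threading.Lock()

def long_write():
    with device_lock:                 # ongoing WRITE holds the device
        time.sleep(0.2)               # models a slow program operation

def demand_page_read(result):
    t0 = time.monotonic()
    with device_lock:                 # urgent READ must wait its turn
        result["latency"] = time.monotonic() - t0

result = {}
w = threading.Thread(target=long_write)
w.start()
time.sleep(0.05)                      # READ arrives in mid-write
r = threading.Thread(target=demand_page_read, args=(result,))
r.start()
w.join()
r.join()
assert result["latency"] > 0.1       # the read stalled behind the write
```

Under these assumptions the read's observed latency is dominated by the unrelated write, which is precisely the problem for latency-critical demand paging.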
There have been a number of attempts at resolving this problem. For example, some systems rely on queuing the demand paging request until an ongoing read or write operation is complete. This approach, however, is not compatible with the immediate response required to service the demand paging request. For the protocols/buses that support a ‘stop’ of the current operation (in the event a request of higher priority has been identified), issuing the stop command to halt execution of a given process and only then servicing the (urgent) demand paging request can take a long time. Again, the urgency of the demand paging request is compromised, as is the overall performance of the system. Another well-known approach relies upon intelligent queuing based on recognition of priority (a ‘prioritized queue’). However, these systems do not address the issue of treating an urgent request that was not anticipated, i.e., one that requires a guaranteed real-time response and is received in the midst of another operation of lower priority.
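The ‘prioritized queue’ approach, and its limitation, can be sketched as follows. The controller class, priority constants, and operation labels are hypothetical; the sketch only illustrates that priority ordering applies to *pending* requests and cannot preempt an operation already dispatched to the device.

```python
import heapq

# Sketch of a prioritized-queue controller: pending requests are ordered
# by priority (lower value = more urgent), but once an operation has been
# popped and dispatched it runs to completion uninterrupted.

URGENT, NORMAL = 0, 1

class PrioritizedController:
    def __init__(self):
        self._queue = []
        self._seq = 0                  # FIFO tie-break within a priority

    def submit(self, priority, op):
        heapq.heappush(self._queue, (priority, self._seq, op))
        self._seq += 1

    def run_all(self):
        order = []
        while self._queue:
            _, _, op = heapq.heappop(self._queue)   # not interruptible
            order.append(op)
        return order

ctl = PrioritizedController()
ctl.submit(NORMAL, "write-A")
ctl.submit(NORMAL, "write-B")
ctl.submit(URGENT, "demand-page-read")   # submitted last, served first
assert ctl.run_all() == ["demand-page-read", "write-A", "write-B"]
```

Note that the urgent request jumps ahead only of *queued* work; an unanticipated urgent request arriving while a low-priority operation is already executing still waits for that operation to finish, which is the gap identified above.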
This problem is increasingly acute in the context of storage devices concurrently supporting different functionality/logical protocols, each requiring a different system-level priority. For example, embedded storage devices (embedded SD) have legacy mass storage commands that coexist with OS code image demand-paging fetches conveyed over the same physical storage bus. Other examples include legacy mass storage commands (e.g., SD read/write commands) coexisting with TCP/IP interactions conveyed over the same physical storage bus, as well as any combination of legacy mass storage commands coexisting with both demand-paging and TCP/IP interactions.
Therefore, improved management of concurrent operations having various priority levels in a data storage device is highly desirable.