In modern electronic systems, high-speed Dynamic Random Access Memory (DRAM) devices are key components in nearly every design. DRAM devices are simple in structure but require a more complicated interface than other memory technologies such as Static Random Access Memories (SRAMs). Current DRAM devices incorporate several advanced features needed to achieve the performance levels demanded by modern processing devices. These features require additional circuitry in the memory controller and make it more difficult for a designer to create a controller from scratch.
Memory controller implementations have often attempted to improve DRAM access efficiency by taking a stream of access requests and optimizing those requests so that no accesses to memory are wasted. Current memory controllers implement only a few such optimizations because of the limited length of the access stream they can consider. This limitation arises from the use of a traditional state machine to make access priority decisions: a traditional state machine becomes unmanageably complex as more accesses are added to the decision mechanism. If a larger section of the access stream could be used for optimization, access efficiency would improve substantially, increasing system bandwidth while reducing cost and power consumption.
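As an illustration of this kind of access-stream optimization (a minimal sketch, not drawn from any particular controller design), the code below reorders a window of DRAM requests so that accesses to the same bank and row are grouped together. Because a DRAM bank must precharge and activate a new row each time the target row changes, grouping same-row requests reduces the number of row activations. The request format, window policy, and function names here are hypothetical, chosen only to make the idea concrete; a real controller must also honor ordering, coherency, and fairness constraints.

```python
# Illustrative sketch: row-buffer-aware reordering of a window of DRAM
# requests. Each request is a (bank, row) pair; the details are
# hypothetical and chosen only to demonstrate the optimization.

from collections import defaultdict

def count_activations(requests):
    """Count row activations: one each time a bank's open row changes."""
    open_row = {}          # bank -> currently open row
    activations = 0
    for bank, row in requests:
        if open_row.get(bank) != row:
            activations += 1
            open_row[bank] = row
    return activations

def reorder_window(requests):
    """Group requests in the window by (bank, row), keeping groups in
    order of first appearance. A real scheduler would bound reordering
    to preserve ordering and fairness guarantees."""
    groups = defaultdict(list)
    order = []
    for req in requests:
        if req not in groups:
            order.append(req)
        groups[req].append(req)
    return [req for key in order for req in groups[key]]

# Interleaved accesses to two rows of bank 0 cost 4 activations
# unoptimized; after grouping, only 2 activations are needed.
window = [(0, 1), (0, 2), (0, 1), (0, 2)]
print(count_activations(window))                  # -> 4
print(count_activations(reorder_window(window)))  # -> 2
```

The key point the sketch makes is that the savings grow with the length of the window the scheduler can examine, which is exactly why the limited access-stream visibility of a state-machine-based controller caps the achievable efficiency.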
Additionally, previous memory controller implementations have required most operations to pass through the scheduling and re-ordering paths inside the controller. This results in a higher minimum latency through the memory controller.
In light of the foregoing discussion, there is a need for an efficient Dynamic Random Access Memory (DRAM) controller.