Computer systems operate by executing instruction sequences that form a computer program. These instruction sequences are stored in a memory subsystem, along with any data operated on by the instructions, both of which are retrieved as necessary by a processor, such as a central processing unit. The speed of CPUs has increased at a much faster rate than that of the memory subsystems upon which they rely for data and instruction code, and as such, memory subsystems can be a significant performance bottleneck. While one solution to this bottleneck would be to build a computer system primarily from very fast memory, such as static random-access memory, the cost of such memory would be prohibitive. In order to balance cost with system performance, memory subsystem architecture is typically organized in a hierarchical structure, with faster, more expensive memory operating near the processor at the top; slower, less expensive memory operating as storage memory at the bottom; and memory having an intermediate speed and cost operating in the middle of the memory hierarchy.
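The performance benefit of such a hierarchy can be illustrated with the standard average-memory-access-time (AMAT) formula, in which accesses that hit in a fast upper level avoid the latency of the slower level below. The sketch below uses hypothetical latency figures (not taken from the text above) purely for illustration:

```python
def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Average memory access time for a two-level hierarchy.

    Every access pays the upper level's hit time; only the fraction
    that misses additionally pays the lower level's penalty.
    """
    return hit_time_ns + miss_rate * miss_penalty_ns


# Assumed example numbers: a 1 ns cache, a 5% miss rate, and a
# 100 ns main-memory penalty.
print(amat(1.0, 0.05, 100.0))  # 6.0 ns on average
```

Even with only one access in twenty going to the slow level, the average access time (6 ns here) stays far closer to the fast memory's latency than to the slow memory's.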
Further techniques can be implemented to improve the efficiency of a memory hierarchy. For example, cache buffering of data between memory levels can reduce the frequency with which lower-speed memory is accessed. In another example, parallel access channels can be used, both within and between memory levels, to perform data operations in parallel.
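The cache-buffering idea can be sketched with a minimal direct-mapped cache model that counts how many accesses are satisfied without touching the slower level. The line size, line count, and access pattern below are illustrative assumptions, not details from the text:

```python
class DirectMappedCache:
    """Toy direct-mapped cache: tracks hits and misses only, no data."""

    def __init__(self, num_lines: int, line_size: int) -> None:
        self.num_lines = num_lines
        self.line_size = line_size
        self.tags = [None] * num_lines  # one tag slot per cache line
        self.hits = 0
        self.misses = 0

    def access(self, addr: int) -> bool:
        """Return True on a hit; on a miss, fill the line from 'below'."""
        block = addr // self.line_size      # which memory block
        index = block % self.num_lines      # which cache line it maps to
        tag = block // self.num_lines       # disambiguates blocks sharing a line
        if self.tags[index] == tag:
            self.hits += 1
            return True
        self.tags[index] = tag              # model the fill from slower memory
        self.misses += 1
        return False


# A sequential scan of 1024 byte addresses with 16-byte lines:
# only the first access to each 16-byte block misses.
cache = DirectMappedCache(num_lines=64, line_size=16)
for addr in range(1024):
    cache.access(addr)
print(cache.hits, cache.misses)  # 960 hits, 64 misses
```

Because of spatial locality, 15 of every 16 accesses in this scan hit in the cache, so the slower memory level is consulted only 64 times for 1024 requests.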