The introduction of faster microprocessors and digital signal processors (DSPs), often in multiprocessor systems, has increased the importance of on-chip memories with high hit ratios.
A possible solution, implemented in various DSPs and real-time processors, is to use direct memory access (DMA) to load an internal random access memory (RAM), in parallel with current program execution, with program sections to be used in the near future. Such schemes can yield a 100% hit ratio with deterministic performance. But to use the DMA effectively, the programmer (or compiler) must keep track of the physical addresses at all times--a cumbersome task.
A cache can yield similar results transparently. But because caches exploit the statistical characteristics of the code, they cannot guarantee deterministic access time, which is a major requirement for time-critical routines in real-time systems. Providing both a DMA and a cache on the same die can resolve this contradiction, but would result in inefficient use of silicon area.