This invention relates to integrated circuits with memory arrays and, in particular, to circuits for decreasing the time interval between the request from a microprocessor for access to data stored in such memory arrays and the subsequent transmission of such data to the microprocessor.
Improvements in microprocessor performance have been outpacing corresponding improvements in access time for high-density, non-volatile, semiconductor memories, such as EPROMs. The disparity between required and available memory-array access time has grown larger as a result of recently developed digital-signal-processor (DSP) and reduced-instruction-set-computer (RISC) microprocessor architectures. To utilize a microprocessor's performance capability fully, system designers have resorted to complex architectures such as memory interleaving and high-speed static-random-access-memory (SRAM) caches. The alternative has been to compromise system performance by slowing microprocessor access to memory arrays through use of wait states, which previously have been required for every access. Accordingly, there is a need for an improved integrated memory array configuration that minimizes total access time in microprocessor system applications without resort to complex circuit architectures.
Studies have shown that microprocessor code typically exhibits a high degree of both linearity and locality. Many microprocessor architectures linearize memory access requests because of on-chip cache burst-fill modes or because of instruction pre-fetch queues. When those microprocessor modes or architectures are used, a relatively small part of the data stored in a memory array is accessed during a relatively large percentage of the address sequences, and such accesses account for a large percentage of total access time.
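The linearity described above can be illustrated with a small simulation. The sketch below generates a synthetic address trace that is mostly sequential with occasional jumps, mimicking instruction pre-fetch or burst-fill behavior, and then counts how many accesses fall in the same memory-array row as the preceding access. The row size, sequential-access probability, and address range are illustrative assumptions, not values from this document.

```python
import random

random.seed(0)

ROW_SIZE = 64          # bytes per memory-array row (assumed for illustration)
SEQ_PROB = 0.9         # probability the next access is sequential (assumed)
ADDR_SPACE = 1 << 20   # 1 MiB address range (assumed)

# Generate a synthetic address trace: mostly linear, with occasional jumps,
# mimicking instruction pre-fetch queues and cache burst-fill modes.
addr = 0
trace = []
for _ in range(10_000):
    trace.append(addr)
    if random.random() < SEQ_PROB:
        addr = (addr + 4) % ADDR_SPACE        # sequential 32-bit fetch
    else:
        addr = random.randrange(ADDR_SPACE)   # branch to a random address

# Count accesses that stay within the same row as the previous access;
# in a row-oriented array, such accesses need not repeat the full
# row-decode delay, which is the opportunity the text identifies.
same_row = sum(
    1 for prev, cur in zip(trace, trace[1:])
    if prev // ROW_SIZE == cur // ROW_SIZE
)
print(f"same-row accesses: {same_row / (len(trace) - 1):.1%}")
```

Even with this simple model, a large majority of accesses land in the same row as their predecessor, which is why serving sequential accesses quickly can recover much of the access time otherwise lost to wait states.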