Present day data processing systems which use a microprocessor and which are optimized for high speed operation often require a "cache". A cache is a block of memory that can be accessed very quickly by the microprocessor. Usually, a cache can be accessed more quickly than the main or system memory used in the data processing system. Because microprocessor systems make frequent accesses to memory, making accesses to cache instead of to system memory results in a significant savings of time.
Caches are generally used to store frequently used or recently used values, such as addresses, data, or instructions. A goal is to minimize the number of memory accesses that must use the slower system memory by maximizing the number of memory accesses that are able to use the cache instead. Because a large number of accesses are made to cache, it is important that the amount of time required to access the cache be reduced as much as possible in order to speed up the data processing system. Generally, the shorter the cache access time, the faster the execution speed of the data processing system. Therefore, reducing the time required to access a cache is an important goal of many high speed data processing systems.
Many microprocessor systems utilize a cache controller in addition to a cache. A cache controller is a device that coordinates each access to the cache. Therefore, part of the time required for each cache access is the time required for the cache controller to perform its function. Cache controllers also contain memory within their internal circuitry. The faster the memory within the cache controller can be accessed, the faster the cache controller itself can operate and the faster the data processing system can execute instructions.
Each memory cell within a memory, including a cache controller memory, is capable of storing a digital value representing either a logical state "0" or a logical state "1". Memory cells are then combined to form memory entries. A memory entry is made up of one or more memory cells and corresponds to the width of the memory. An "8K by 8" memory has 8K total memory entries and each memory entry consists of eight memory cells or bits. An "8K by 1" memory still has 8K total memory entries, but each memory entry consists of only one memory cell or bit. Note that "K" is equal to 1024.
Prior art memories, including cache controller memories, use a memory cell array arranged in a grid of rows and columns. The width of each column is the same as the width of the memory's data entries, and can be one or more bits wide. For example, a "4K by 4" memory has memory entries that are four bits wide, that is, each memory entry contains four memory cells. Thus, each column in a "4K by 4" memory is four bits wide. Similarly, a "4K by 1" memory has memory entries that are only one bit wide, that is, each memory entry contains only one memory cell. Thus, each column in a "4K by 1" memory is one bit wide. Additionally, the width of the memory's data path is often the same as the width of the memory's data entries.
A standard Static Random Access Memory (SRAM) cell has two bit lines which are used to transfer data in and out of the memory cell. In prior art memories, only one memory cell at a time transferred its data contents onto the bit lines. Thus in prior art memories, only one memory entry at a time was accessed, and this access was accomplished by selecting one row and one column. The one memory entry that was located in both the selected row and the selected column was used to drive one pair of bit lines for each memory cell in the memory entry. Because each memory cell in the selected memory entry was coupled to a different pair of bit lines, the bit lines were only ever driven by one selected memory cell at a time. The faster each selected memory cell drives its pair of bit lines to the required voltage, the faster the speed of the memory.
In order to increase the speed of the memory by driving the bit lines more quickly, prior art memories increased the size of the devices within each memory cell that were used to drive the bit lines. Unfortunately, increasing the size of the drive devices increased the size of the memory cells and, thus, the amount of semiconductor area required to build each cell. As a result, prior art memories were faced with a direct trade-off between the size of each memory cell and the access speed of the memory.