As data processing systems operate at faster speeds, associated peripheral memory devices must be able to function at compatible frequencies. However, as semiconductor technologies have improved, the frequency at which a data processing system operates has increased to equal or even surpass the operating frequency of peripheral memory devices. In the latter case, the data processing system must often wait several clock cycles for information to be received from peripheral memory devices. Consequently, several techniques have been introduced to alleviate or shorten the latency arising from the performance gap between peripheral memory devices and the data processing system.
In one technique, a fast memory device called a "cache" is placed between the data processing system and a peripheral memory device. In this example, a peripheral memory device typically stores the bulk of information needed or provided by the data processing system. However, the peripheral memory device is not able to provide information in a single clock cycle and the data processing system must wait for several clock cycles before beginning to process another instruction. In comparison, the fast memory device provides information very quickly. Therefore, if the fast memory device is used to store the information values which are most often accessed by the data processing system, the period of time which the data processing system waits to receive information is generally shortened. By using this technique, the bulk of the information is still stored in the peripheral devices, but the information most frequently used is stored in the fast memory device.
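The caching technique described above can be sketched in a few lines; the class name, entry count, and FIFO eviction policy below are illustrative assumptions, not part of the original description:

```python
# Minimal sketch of the cache technique: a small, fast memory holds the
# most frequently accessed values, while a slower backing store (here a
# plain dict standing in for the peripheral memory device) holds the bulk
# of the information. Names and sizes are illustrative.

class SimpleCache:
    def __init__(self, backing_store, size=4):
        self.backing = backing_store   # slow peripheral memory
        self.size = size               # number of fast-memory entries
        self.entries = {}              # address -> cached value
        self.hits = 0
        self.misses = 0

    def read(self, address):
        if address in self.entries:    # fast path: value already cached
            self.hits += 1
            return self.entries[address]
        self.misses += 1               # slow path: fetch from backing store
        value = self.backing[address]
        if len(self.entries) >= self.size:
            # evict the oldest entry (dicts preserve insertion order)
            self.entries.pop(next(iter(self.entries)))
        self.entries[address] = value
        return value
```

Reading the same address twice produces one slow miss followed by a fast hit, which is the mechanism by which the waiting period is shortened for frequently used values.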
The fast memory device may be integrated within the structure of the data processing system or implemented externally between the data processing system and the peripheral memory device. In either case, the fast memory device is an expensive solution. If the fast memory device is integrated within the structure of the data processing system as a portion of the semiconductor device, the fast memory device consumes a substantial amount of circuit area. Rather than providing other circuitry to further enhance the functionality of the data processing system, a fast memory device must be integrated in the data processing system to maintain the highest operating frequency. If one or more fast memory devices are implemented externally to the data processing system, the additional external fast memory devices result in a higher system overhead cost.
In a second technique, a memory subsystem compensates for the difference in the operation frequencies of the memory subsystem and the data processing system by allowing multiple concurrent accesses of different addresses. The multiple concurrent accesses are accomplished by providing a plurality of memory banks wherein each of the memory banks is independently and distinctly addressed and controlled. When the addresses of the memory banks are arranged such that the consecutive addresses are provided by n different memory banks, where n is an integer, the memory subsystem is n-way interleaved.
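The n-way arrangement can be illustrated with a simple address-to-bank mapping; the low-order (modulo) decoding shown here is one common interleave scheme, assumed for illustration:

```python
def bank_for_address(address, n):
    """Low-order n-way interleave: consecutive addresses fall in
    consecutive banks, so address a maps to bank (a mod n) and to
    row (a // n) within that bank."""
    return address % n, address // n
```

With n = 4, addresses 0 through 7 cycle through banks 0, 1, 2, 3, 0, 1, 2, 3, so any run of four consecutive addresses lands in four distinct banks.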
When the data processing system accesses the peripheral memory devices in an interleaved manner, a first address of a first memory bank is accessed and then a first address of a second memory bank is concurrently accessed. Similarly, a plurality of other memory banks may be accessed while the first and the second memory banks continue to process a respective memory access. During an interleaved memory access, the data processing system may access any predetermined number of addresses concurrently.
When the data processing system provides an address to access one of a plurality of contiguous information values, the address is decoded and indicates which one of the plurality of memory banks contains the information value. To access the memory banks in an interleaved manner, addresses must be decoded such that the plurality of contiguous information values are contained in different memory banks and, therefore, may be accessed concurrently.
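The decoding requirement above can be sketched as a grouping step: addresses that decode to different banks may be issued in the same slot, while addresses that collide on a bank must wait for a later slot. The greedy packing and low-order decode below are illustrative assumptions:

```python
def issue_groups(addresses, n):
    """Group a stream of addresses into issue slots. Addresses within
    one slot touch distinct banks (low-order n-way interleave assumed)
    and may therefore be accessed concurrently; an address whose bank
    is already taken in every open slot starts a new slot."""
    groups = []
    for addr in addresses:
        bank = addr % n
        for group in groups:
            if bank not in group:
                group[bank] = addr
                break
        else:
            groups.append({bank: addr})
    return groups
```

Eight contiguous addresses on a 4-way subsystem pack into two fully concurrent slots, whereas addresses that all decode to the same bank serialize into one slot apiece.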
In a standard memory device, an access time is defined as the time from the start of execution of an operation until the result of that operation is available. For example, in a read operation in the standard memory device, the access time is the time from the start of execution of the read operation until the data read during the read operation is ready for use in a subsequent operation. The time from the start of execution of an operation until the device may execute another operation is referred to as the "cycle time."
In an interleaved memory access, the cycle time necessary to execute a first access of the first memory device is dependent on the cycle time of the first memory device. However, the time necessary to begin execution of subsequent operations is shortened, since the subsequent operations are executed concurrently with the first memory access. Although the cycle time of each of the peripheral memory devices remains the same, the data processing system is able to overlap the accesses of each of the peripheral memory devices and, therefore, increases the number of operations executed in a given amount of time.
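The benefit of overlapping accesses can be made concrete with a small timing calculation; the cycle counts used below are assumed values for illustration only:

```python
def total_cycles(num_accesses, cycle_time, issue_interval, interleaved):
    """Total cycles to complete a run of accesses.

    Without interleaving, each access must wait out the full bank
    cycle time before the next may begin. With interleaving, a new
    access can start every issue_interval cycles while earlier banks
    are still busy, so only the last access pays the full cycle time.
    """
    if not interleaved:
        return num_accesses * cycle_time
    return (num_accesses - 1) * issue_interval + cycle_time
```

For example, with an assumed bank cycle time of 8 cycles and one new access issued per cycle, four sequential accesses take 32 cycles, while four interleaved accesses complete in 11, even though each bank's own cycle time is unchanged.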
Although interleaved addressing allows the data processing system to concurrently access peripheral memory devices, the overhead cost is expensive. For example, a predetermined number of external peripheral memory devices is required to implement interleaved addressing, and this requirement results in higher system overhead costs.
Both the fast memory device implementation and the interleaved addressing method result in an increased system overhead cost. Additionally, if the fast memory device is integrated within the structure of the data processing system, the designer of the data processing system must compromise between system functionality and system cost.