When the central processing unit (CPU) of a data processing system performs input-output (I/O) or memory access operations, it issues a sequence of signals specifying the location of information to be brought into the CPU, or of information to be sent out by the CPU for storage. Such operations typically involve placing address information on a synchronous bus to establish communication with a device or memory unit on an asynchronous bus that is the source or destination of the transfer, issuing the command that defines the operation to be performed, and then passing the actual data. The details of these steps, including which control lines must be asserted, the timing for placing address or data signals on the buses that carry them, and the timing of the control signals, are specified by the applicable bus protocol and/or by the requirements of the I/O or memory device in question. Most protocols require certain signals to be held for specified time intervals, most notably set-up intervals.
The set-up interval associated with a command for a data transfer to the CPU defines the period during which the address lines must be held stable before the READ command signal is asserted. For a data transfer from the CPU, there is not only an address set-up period but also a further required interval after issuance of the WRITE command during which the data signals must be held stable.
With each I/O or memory access operation, the full sequence of steps specified by the protocol is normally executed, including the required set-up times discussed above. Failure to observe these set-up times can lead to writing to an incorrect address or to loss of data. For example, if the asynchronous device is a static RAM, a WRITE to the wrong location could occur if the address were not held stable through the cycle, or if the set-up or hold times were violated relative to the WRITE signal pulse.
In many transfers of information to or from a CPU, a principle of locality operates: a data transfer to or from a particular location or device is likely to be followed by another transfer to or from the same location or device, using the same address. By following the full data transfer protocol even when the address is unchanged, the data transfer control circuitry repeats at least some of the same signal sequences and the same set-up periods. This repetition slows communications to and from the CPU.
It would be desirable to have an apparatus or method that would reduce the time overhead of data transfers to or from the same addressed device on an asynchronous bus.