In a prior microprocessor-based system having a microprocessor, memory, and other circuitry, clock or timing signals are necessary for various uses. For instance, when the microprocessor accesses a DRAM (i.e., dynamic random access memory) in the system, many clock signals are required from the microprocessor to latch addresses, decode the addresses, access the memory array, precharge nodes, control refreshing, etc.
Advances in microprocessor technology have led to the creation of high speed, high performance microprocessors. However, interfacing such a high speed, high performance microprocessor to a DRAM array requires the microprocessor to analyze many timing parameters, to examine the effects of refresh cycles on bus timing, and to observe minimum and maximum signal widths, all of which adversely affects the speed and performance of the microprocessor.
One prior solution to these problems is to design a DRAM controller that interfaces the microprocessor to the DRAM device. Such a prior DRAM controller typically provides complete control and timing for the DRAM device, and the microprocessor interfaces only to the controller. In such prior DRAM controllers, many techniques have been employed to generate the required timing signals internally.
Generally in the prior art, an internally generated system clock signal in the microprocessor is delayed by using the charge-discharge characteristics of a resistor-capacitor network or of an MOS transistor-capacitor network. The length of the delay is controlled in these cases by the amount of resistance or capacitance, or by the characteristics of the MOS transistor. The resulting timing signals are then used to access the DRAM array.
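The charge-discharge behavior described above can be sketched numerically. The following is a minimal illustration, assuming an ideal first-order RC network charging toward the supply voltage, with the 10 kΩ and 1 pF component values chosen purely for illustration:

```python
import math

def rc_delay(r_ohms, c_farads, threshold=0.5):
    """Time for an ideal RC node, charging as v(t) = Vdd*(1 - exp(-t/(R*C))),
    to cross a logic threshold expressed as a fraction of Vdd:
    t = -R*C * ln(1 - threshold)."""
    return -r_ohms * c_farads * math.log(1.0 - threshold)

# A hypothetical 10 kOhm / 1 pF network crosses the half-supply point
# after roughly 0.69 * R * C, i.e. about 6.9 ns.
nominal = rc_delay(10e3, 1e-12)

# A 20% increase in resistance (e.g., from process variation) stretches
# the delay by the same 20%, illustrating the accuracy problem noted below.
varied = rc_delay(1.2 * 10e3, 1e-12)
```

Because the delay scales directly with R and C, any process-, voltage-, or temperature-induced change in those values shifts the timing edge proportionally.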
One prior problem associated with this technique is that it does not provide accurate timing signals. Large variations in MOS circuit characteristics, due for example to typical wafer processing, supply voltage variations, and operating temperature, cause substantial variations in the timing delays.
One prior solution to this timing problem is to generate the timing signals in the DRAM controller by the use of a synchronous delay line. The synchronous delay line typically receives a clock signal and provides a series of taps, wherein each tap provides a timing pulse having a precise delay from the commencement of the clock cycle initiated by the clock signal. The clock signal is applied from the microprocessor, external to the DRAM controller, and is coupled to the synchronous delay line, which thus operates synchronously with the external microprocessor. The synchronous delay line then generates timing signals designed to have precise delays from the start of the clock cycle. In addition, the timing signals are insensitive to variations in wafer processing, supply voltage, and temperature.
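The tap arrangement described above can be modeled in software. The sketch below assumes a delay line whose taps are evenly spaced across one clock period; the 40 ns period and tap count are hypothetical values chosen for illustration:

```python
def delay_line_taps(clock_period_ns, num_taps):
    """Model a synchronous delay line: each tap fires a timing pulse at a
    precise, evenly spaced offset from the start of the clock cycle.
    Returns the list of tap firing times in nanoseconds."""
    step = clock_period_ns / num_taps
    return [step * (i + 1) for i in range(num_taps)]

# With a hypothetical 40 ns clock and 8 taps, timing pulses occur
# every 5 ns: 5, 10, 15, ..., 40 ns after the cycle begins.
taps = delay_line_taps(40.0, 8)
```

Note that because the tap times are derived from the clock period itself, changing the clock frequency changes every tap time, which is the source of the frequency-dependence disadvantage discussed below.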
Prior synchronous delay lines are described in (1) U.S. Pat. No. 4,496,861, issued on Jan. 29, 1985, entitled "INTEGRATED CIRCUIT SYNCHRONOUS DELAY LINE", in (2) U.S. Pat. No. 4,975,605, issued on Dec. 4, 1990, entitled "SYNCHRONOUS DELAY LINE WITH AUTOMATIC RESET", and in (3) U.S. Pat. No. 4,994,695, issued on Feb. 19, 1991, entitled "SYNCHRONOUS DELAY LINE WITH QUADRATURE CLOCK PHASES".
One disadvantage of the use of synchronous delay lines in the DRAM controller is that the synchronous delay lines are designed to operate synchronously with the microprocessor's clock signal at a particular frequency. Because the synchronous delay lines in the DRAM controller operate at that particular clock frequency, the delay lines depend on the type and speed of the external microprocessor. When the microprocessor is replaced with a new type of microprocessor having a higher frequency clock signal, the synchronous delay lines in the DRAM controller cannot generate the timing signals required for the DRAM under the new clock signal. Thus, the DRAM controller must also be replaced.
Another disadvantage of the use of synchronous delay lines in the DRAM controller is that all input signals to the delay lines must be delayed in order to be synchronized with the clock signal at the delay lines. This is typically done by having the controller wait at least a couple of clock periods to assure that an input signal has been received and synchronized at the delay lines. As a result, the controller does not respond to input signals immediately, and much time is wasted synchronizing them.
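The synchronization penalty described above can be sketched as a chain of clocked sampling stages; the two-stage chain below is an assumed, illustrative arrangement, not a description of any particular prior controller:

```python
def synchronizer_latency(signal, stages=2):
    """Pass a per-cycle signal trace (0/1 per clock period) through a chain
    of clocked sampling stages. The controller sees the same trace delayed
    by 'stages' clock periods; earlier cycles read as 0."""
    if stages == 0:
        return list(signal)
    return [0] * stages + list(signal)[:-stages]

# An input asserted in cycle 2 is first visible to the controller in
# cycle 4: two clock periods are spent purely on synchronization.
seen = synchronizer_latency([0, 0, 1, 1, 1, 1])
```

The model makes the cost concrete: every input, however urgent, pays the fixed multi-cycle latency before the controller can act on it.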
Another disadvantage associated with the use of the synchronous delay line in the DRAM controller is that it is difficult, though not impossible, to transfer a high frequency clock signal accurately and free of noise within the system. As the frequency of the microprocessor's clock signal increases, more chip space is needed to ensure an accurate transfer of the clock signal to other devices within the system. Therefore, it is desirable to distribute the high frequency clock signal to as few places in the system as possible.
Another disadvantage associated with the use of the synchronous delay line in the DRAM controller is that the interface logic between the controller and other system devices becomes more complicated when the controller is dual ported. A dual ported DRAM controller is connected both to the microprocessor in the system via a host bus and to other devices, such as a system bus controller or a peripheral controller, via a system bus. These other devices typically have direct access to the DRAM array through the controller, without the involvement of the microprocessor. In this case, the controller operates synchronously with the microprocessor under a higher frequency clock signal, while these other devices in the system operate under clock signals of much lower frequency. The microprocessor's clock signal and those of the system devices are not synchronized with each other. Because the controller operates synchronously with the microprocessor's clock signal, communication between the controller and the other system devices typically requires a handshake operation. Therefore, additional logic is required to accomplish the operation.
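The handshake between the two unsynchronized clock domains can be sketched as a request/acknowledge exchange. The four-phase scheme and the latency figures below are assumptions for illustration only; the point is that each transition is observed only after the receiving side's synchronization delay, in that side's own clock domain:

```python
def four_phase_handshake(req_raised_at, controller_latency=1, device_latency=3):
    """Model a four-phase req/ack handshake between a fast controller and a
    slow system-bus device. Times are in controller clock cycles; the slow
    device's larger latency reflects its lower clock frequency.
    Returns (ack_raised_at, req_dropped_at, ack_dropped_at)."""
    ack_raised = req_raised_at + controller_latency   # controller samples req, asserts ack
    req_dropped = ack_raised + device_latency         # slow device samples ack, drops req
    ack_dropped = req_dropped + controller_latency    # controller samples drop, drops ack
    return ack_raised, req_dropped, ack_dropped

# A request raised at cycle 0 completes the full exchange only at cycle 5,
# and dedicated logic must track each phase on both sides.
timeline = four_phase_handshake(0)
```

Even this simplified model shows why the dual ported case adds logic: each side must hold its signal stable until the other side's acknowledgment is observed across the domain boundary.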