High-speed processor-based electronic systems have become pervasive in computing, communications, and consumer electronics, to name a few application areas. The pervasiveness of these systems, many of which are based on multi-gigahertz processors, has in turn increased demand for such systems to host larger numbers of applications of greater complexity than those hosted on previous generations of electronic systems. The transfer of information and signals required among the components of these high-speed systems in support of these applications has led to increasing demand for interfaces that support efficient high-speed information transfer. Examples of such interfaces include those between the processors and memory devices of high-speed systems.
One memory type commonly used in high-speed processing systems is double data rate (DDR) dynamic random access memory (DRAM). A DDR DRAM is typically twice as fast as a single data rate DRAM running at the same clock speed because it transfers data on both the rising and falling edges of the clock.
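The doubling described above follows directly from the number of clock edges that carry data. The sketch below is illustrative only; the 800 MHz clock frequency is an assumed example value, not one taken from the source.

```python
# Illustrative sketch: effective transfer rate of a single data rate (SDR)
# versus a double data rate (DDR) interface at the same clock speed.
def transfers_per_second(clock_hz, edges_per_cycle):
    """Data transfers per second, given how many clock edges carry data."""
    return clock_hz * edges_per_cycle

clock_hz = 800e6  # assumed example memory clock frequency (800 MHz)
sdr = transfers_per_second(clock_hz, 1)  # SDR: data on rising edge only
ddr = transfers_per_second(clock_hz, 2)  # DDR: data on rising and falling edges

assert ddr == 2 * sdr  # DDR moves twice the data at the same clock speed
```

At the assumed 800 MHz clock, the SDR interface performs 800 million transfers per second while the DDR interface performs 1.6 billion.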
While the use of double data rate memory systems increases data transfer speeds, issues arise regarding the timing of the data transfer, particularly where a memory controller receives data sent by a double data rate DRAM attached thereto using a strobe-based method. Under this strobe-based method, a strobe signal (also referred to as the DQS signal) is edge-aligned to, and accompanies, a data signal (also referred to as the DQ signal) sent by the DRAM. The controller uses this DQS signal to capture the data signal sent by the DRAM: the DQS signal and the data are received, and the DQS signal is delayed by some fixed amount, usually one-fourth of the memory system clock period. This delayed DQS signal, which is approximately in quadrature with the received data, is then used as a common sample clock for each of the DQ input receivers of, typically, a byte (8 bits) of data sent in parallel. Due to system offsets and pin-to-pin offsets in the DRAM (commonly referred to on DRAM datasheets as “tDQSQ”), however, a single strobe-delay value for the whole byte cannot be the ideal strobe delay for every pin. Furthermore, while manual adjustment of per-bit offsets can yield higher-performing memory systems, requiring such manual adjustment in a production memory system tends to be expensive.
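The per-pin timing problem can be made concrete with a small numeric sketch. The clock period and the per-pin DQ skews below are assumed example values (within a hypothetical datasheet tDQSQ limit), not figures from the source; the point is that one shared quarter-period strobe delay leaves a residual sampling error equal to each pin's skew.

```python
# Hypothetical sketch of strobe-based capture timing for one byte lane.
clock_period_ps = 1250.0                # assumed 800 MHz memory clock
strobe_delay_ps = clock_period_ps / 4   # fixed quarter-period DQS delay

# Assumed per-pin DQ skews (ps) relative to DQS, i.e. tDQSQ-style offsets.
tdqsq_ps = [0.0, 35.0, -20.0, 50.0, 10.0, -45.0, 25.0, -10.0]

# Ideal sample point for each pin shifts with that pin's skew, but the
# common delayed DQS samples every pin at the same strobe_delay_ps point.
ideal_sample_ps = [strobe_delay_ps + skew for skew in tdqsq_ps]
error_ps = [abs(s - strobe_delay_ps) for s in ideal_sample_ps]

# The residual error per pin is just that pin's skew; no single shared
# delay can zero it for all eight pins at once.
worst_error_ps = max(error_ps)
```

Here the worst pin samples 50 ps away from its eye center even with a perfectly chosen common strobe delay, which is the motivation for per-pin offset control.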
In some memory systems, calibration is performed by adjusting the read and write timing positions of the memory controller based on pattern comparisons. For example, to calibrate the read timing of a system, a DRAM can be instructed to provide a known pattern to the controller. The controller then sweeps its read-clock timing position to determine the pass and fail regions (e.g., when a comparison between the received data and the expected data fails, the controller deems that phase position to be in a fail region). Once the pass and fail regions across the entire data eye are known, the controller chooses an optimal read-clock position centered within the known passing region. A strobe-delay value can then be determined for this optimal read-clock position.
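The sweep-and-center procedure described above can be sketched as follows. This is a hedged model, not the controller's actual interface: the number of phase steps and the `read_passes` pass/fail predicate are illustrative assumptions standing in for hardware phase adjustment and the pattern comparison.

```python
# Sketch of pattern-comparison read calibration: sweep read-clock phase
# positions, mark each as pass or fail, and center within the passing region.
def calibrate_read_clock(phase_steps, read_passes):
    """Return a read-clock phase centered in the longest passing region.

    read_passes(p) models reading the known pattern at phase p and
    comparing it against the expected data (True = comparison passes).
    """
    results = [read_passes(p) for p in range(phase_steps)]
    best_start, best_len = 0, 0
    start = None
    for p, ok in enumerate(results + [False]):  # sentinel closes last run
        if ok and start is None:
            start = p                           # passing region begins
        elif not ok and start is not None:
            if p - start > best_len:            # longest pass region so far
                best_start, best_len = start, p - start
            start = None
    if best_len == 0:
        raise RuntimeError("no passing region found across the data eye")
    return best_start + best_len // 2           # centered read-clock position

# Example: assume phases 10..29 compare correctly against the known pattern.
optimal = calibrate_read_clock(64, lambda p: 10 <= p < 30)  # → 20
```

A strobe-delay value corresponding to the returned phase would then be programmed per the scheme in the passage above; the 64-step sweep resolution is likewise an assumed example.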
Timing-calibrated memory systems that eliminate pin-to-pin timing variation can give better performance than strobe-based memory systems that use per-byte strobes, but they are substantially more complex. Consequently, there is a need in high-speed, strobe-based memory systems for per-pin (per-data-bit) strobe-offset control and timing calibration that minimizes the DQS-to-DQ timing offset for each DQ pin individually, yielding more robust, higher-speed systems.