The personal computer and server industries generally require a 20-40% yearly performance gain across many workload types in order to be competitive. Various mechanisms are used to provide these levels of performance gains including, for example, core count increases and memory size/bandwidth/latency improvements. Memory improvements typically take the form of faster dynamic random access memory (DRAM), higher DDR (Double Data Rate) bus frequencies, larger capacity dual inline memory modules (DIMMs), more DIMMs per channel and other optimizations. Similarly, multi-socket system performance improvements require faster and better interconnects between the processors.
Higher DDR speeds and DIMM counts require that DRAM channels be carefully tuned for optimum signal quality and bus timing. This tuning is performed by the basic input/output system (BIOS) during boot-up and is commonly referred to as “DDR training.” DDR training includes many time-consuming steps, for example, centering of various strobe signals, crosstalk elimination, and reference voltage calibration. These calibration steps derive optimal DDR timing parameters that are then applied to the DRAM controller and DIMMs. This programming is done before memory is accessed, as these parameters cannot be updated during operation without disturbing memory traffic.
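The strobe-centering step described above can be illustrated with a minimal sketch. This is not actual BIOS or memory-reference code; the delay-tap range, the pass/fail callback, and the simulated channel are all assumptions introduced for illustration. Real training runs against hardware registers before memory is generally usable; here a function standing in for a read-pattern test reports whether a given delay setting produces error-free reads, and the routine returns the center of the widest passing region (the “eye”).

```python
# Illustrative sketch (hypothetical, not production BIOS code): center a
# DQS strobe delay by sweeping delay taps and finding the midpoint of the
# widest contiguous run of taps that pass a read test.

def sweep_strobe_delay(passes_at, delay_taps):
    """Return the center tap of the widest passing region ("eye").

    passes_at  -- callback: True if reads succeed at the given delay tap
    delay_taps -- range of integer delay settings to sweep
    """
    best_start, best_len = -1, -1
    run_start = None
    for tap in delay_taps:
        if passes_at(tap):
            if run_start is None:
                run_start = tap           # a passing run begins here
        elif run_start is not None:
            if tap - run_start > best_len:
                best_start, best_len = run_start, tap - run_start
            run_start = None              # the run ended
    # Handle a passing run that extends to the last tap.
    if run_start is not None and delay_taps[-1] + 1 - run_start > best_len:
        best_start, best_len = run_start, delay_taps[-1] + 1 - run_start
    if best_len <= 0:
        raise RuntimeError("no passing region found; channel untrainable")
    return best_start + best_len // 2     # center of the widest eye

# Simulated channel: reads succeed only at taps 12..20 inclusive.
center = sweep_strobe_delay(lambda tap: 12 <= tap <= 20, range(32))
print(center)  # → 16
```

Centering on the widest passing region, rather than the first passing tap, is what gives the largest timing margin against voltage and temperature drift, which is why non-optimal parameters (discussed next) degrade stability.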
Non-optimal parameters result in higher bit error rates and generally destabilize system operation, yet these complex calibration steps increase boot time. Current DDR4 proposals call for per-device calibration across multiple parameters to achieve higher speeds and lower voltages. As a result, memory training times on these platforms may increase further.
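The growth in training time follows from simple arithmetic: when every DRAM device must be calibrated across several parameters, the test count multiplies. The sketch below is a hypothetical cost model, not a measured figure; the device count, Vref step count, and delay-tap count are illustrative assumptions.

```python
# Illustrative cost model (hypothetical parameters): an exhaustive
# per-device 2-D sweep tests every (Vref, delay) pair on every device,
# so the pass/fail test count is the product of all three dimensions.

def count_training_tests(n_vref_steps, n_delay_taps, n_devices):
    return n_devices * n_vref_steps * n_delay_taps

# e.g. 9 DRAM devices per rank, 64 Vref steps, 32 delay taps (assumed):
tests = count_training_tests(64, 32, 9)
print(tests)  # → 18432 tests for a single rank
```

Even modest per-dimension counts thus yield tens of thousands of tests per rank, which is why per-device, multi-parameter calibration lengthens boot time.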