Performance of computing apparatuses may be increased by raising operating frequencies and by increasing the number of components, such as transistors, in the circuits of the apparatuses. To keep circuit sizes manageable, designers have scaled down the size of circuit components so that larger numbers of devices fit within a given unit area. Today it is not uncommon to find advanced computer system chips that contain millions, even billions, of transistors. This increased density, however, has created numerous problems. One problem is power consumption. Since each electronic circuit component consumes a minute amount of power while operating, circuits with larger numbers of such components generally consume larger quantities of power. Consequently, designers are continually looking for ways to reduce power consumption. Reducing power consumption may provide several benefits. For example, the batteries of mobile devices generally last longer. When many computing devices are amalgamated in close proximity, such as in large-scale server systems, reducing power consumption may significantly reduce electricity costs, reduce heat generation, reduce the cooling costs associated with removing that heat, and even extend the life cycles of the computing apparatuses.
Modern computing apparatuses reduce power consumption by clock gating unused structures, especially in processors. While such clock gating techniques may reduce dynamic power consumption, static power consumption remains an issue. In analyzing the problem of static power, one may note that processors often handle workloads in bursts: high performance may be needed for only short periods of time, while for the remaining time processors may be idle, only periodically performing limited tasks such as checking for new work or maintaining network connections. To achieve low energy consumption during idle times, modern processors may power down (sleep) unused cores. In other words, modern processors may switch one or more cores from a high-power mode to a low-power mode. When powering circuits down or up within a computing apparatus, such as a microprocessor, processor state information may be saved to, or restored from, a volatile or non-volatile storage area.
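The save/restore step described above can be illustrated with a minimal sketch. The structure layout, field names, and functions below are invented for illustration only and do not correspond to any particular processor's architectural state or firmware interface; the sketch assumes the retention area is ordinary memory that stays powered across the sleep transition.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical example of a core's state that must survive a sleep
 * transition. Field names are illustrative, not from a real ISA. */
typedef struct {
    uint64_t gpr[16];  /* general-purpose registers */
    uint64_t pc;       /* program counter */
    uint64_t flags;    /* status flags */
} core_state_t;

/* Retention area: modeled here as ordinary static storage; in practice
 * it could be always-on SRAM or a non-volatile region. */
static core_state_t retention_area;

/* Save the live state before switching the core to a low-power mode. */
void save_state(const core_state_t *live) {
    memcpy(&retention_area, live, sizeof *live);
}

/* Restore the state after waking the core back to a high-power mode. */
void restore_state(core_state_t *live) {
    memcpy(live, &retention_area, sizeof *live);
}
```

The key point is that every byte copied into and out of the retention area costs energy on each sleep/wake cycle, which sets up the problem discussed next.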
As another power reduction technique, processors may power down additional sections or structures at progressively deeper sleep levels. Consequently, more state information must be saved and restored. Unfortunately, unpredictable workloads that arrive in bursts may cause processors to spend much of their time repeatedly saving and restoring processor and operating system (OS) state upon entering and exiting a sleep state, and repeatedly saving and restoring process state for other types of standby operations, such as application context switches. While power usage in the low-power states themselves can be relatively low, computing apparatuses employing such techniques may nonetheless consume significant amounts of energy entering and exiting those states because of the repeated saving and restoring of processor state. For example, some state save and restore mechanisms provided by some processors assume the worst case of processor state usage and save and restore relatively large amounts of state, such as 8 kilobytes (KB) of state per activation, regardless of how much state remains intact from the last standby activation. Additionally, many standby operations of many processors require exclusive OS support for activation.
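The cost asymmetry described above can be sketched as follows. This is a hypothetical model, not any processor's actual mechanism: a worst-case scheme copies the full 8 KB state image on every standby activation, while a dirty-tracking scheme (one possible alternative) copies only the 256-byte blocks modified since the last save. All sizes and names are illustrative assumptions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative parameters (assumed, not from a real design). */
#define STATE_BYTES (8 * 1024)  /* worst-case state image: 8 KB */
#define BLOCK_BYTES 256         /* granularity of dirty tracking */
#define NUM_BLOCKS  (STATE_BYTES / BLOCK_BYTES)

static bool dirty[NUM_BLOCKS];  /* set when a block is written */

/* Record that the state byte at 'offset' was modified. */
void mark_written(size_t offset) {
    dirty[offset / BLOCK_BYTES] = true;
}

/* Worst-case mechanism: always saves the full image. */
size_t bytes_saved_full(void) {
    return STATE_BYTES;
}

/* Dirty-tracking mechanism: saves only modified blocks, then
 * clears the marks for the next standby cycle. */
size_t bytes_saved_tracked(void) {
    size_t n = 0;
    for (size_t i = 0; i < NUM_BLOCKS; i++) {
        if (dirty[i]) {
            n += BLOCK_BYTES;
            dirty[i] = false;
        }
    }
    return n;
}
```

Under this model, a burst that touches only a few blocks of state pays for a few hundred bytes of copying per cycle rather than the full 8 KB, which is the intuition behind avoiding worst-case save/restore.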