One common technique for optimizing power consumption in a processor is dynamic voltage and frequency scaling (DVFS). In DVFS, the voltage and operating frequency (clocking) of the processor are varied depending upon the workload. If the processor has a light workload, it may operate at a lower voltage and frequency to save power. Conversely, if the processor workload becomes demanding, the voltage and frequency are increased accordingly. The DVFS control is typically implemented through software running on the processor itself. The DVFS software monitors the processor workload and chooses a suitable performance level (voltage and frequency setting) accordingly. The performance level is often selected from a set of predetermined performance levels for the processor.
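A minimal sketch of such a software-based DVFS control loop is shown below, assuming hypothetical performance levels and utilization thresholds (the specific voltages, frequencies, and threshold values are illustrative, not taken from any particular processor):

```python
# Predetermined performance levels for the processor:
# (voltage in volts, frequency in MHz) -- illustrative values only.
PERFORMANCE_LEVELS = [
    (0.8, 400),   # low power: light workload
    (1.0, 800),   # balanced: moderate workload
    (1.2, 1600),  # high performance: demanding workload
]

def select_performance_level(utilization):
    """Choose a (voltage, frequency) pair from the predetermined set
    based on measured processor utilization in the range 0.0 to 1.0.

    Threshold values (0.3, 0.7) are assumptions for illustration.
    """
    if utilization < 0.3:
        return PERFORMANCE_LEVELS[0]
    elif utilization < 0.7:
        return PERFORMANCE_LEVELS[1]
    return PERFORMANCE_LEVELS[2]
```

In a real implementation, a routine like `select_performance_level` would run periodically on the processor itself, which is precisely the computational overhead discussed next.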
However, this type of software-based power optimization control places additional computational load on the processor, which must run the power optimization control itself. This added computational burden slows the DVFS response and can lead to suboptimal voltage and frequency selections. Moreover, software-based power optimization control is architecture dependent. It can therefore be difficult to provide generic power optimization control that can be used across processors implemented with different architectures.
There is thus a need in the art for a generalized power optimization strategy for different processor architectures with reduced computational overhead on the processor.