Data networking services and applications enable substantial energy savings in broad sectors of economic activity, for example by replacing the transportation of people and goods with the electronic transfer of data packets. To maximize the energy-saving yield of this replacement function, packet networks themselves must target energy efficiency comprehensively, in every component and in every function. Energy efficiency thus becomes an attribute of foremost importance in the design and qualification of network equipment.
Network nodes forward data packets from one network interface to another, often modifying part of their contents. To minimize energy consumption, energy use in network equipment should be proportional to traffic handling activity. More specifically, energy use in a system should scale with the network traffic load and should become negligible when the system does not process packets.
Rate scaling and sleep-state exploitation are popular methods for reducing energy consumption in network nodes when the traffic load is well below link capacity. (For reference, in this disclosure rate scaling and sleep-state exploitation are both referred to as examples of rate adaptation.)
With rate scaling, the clock frequency of the data-path device changes over time to track the traffic load, exploiting the generally linear relationship between frequency and power to lower power consumption under light traffic loads. However, since the traffic processing rate also scales with the operating frequency, frequency reductions may cause delay to accumulate. To control delay accumulation, the operation of rate-scaling systems is typically constrained by bounds on the additional delay that lower processing rates can impose on traffic. Steeper reductions in power consumption can be obtained by integrating dynamic voltage and frequency scaling (DVFS) technologies into the rate-scaling system. In a DVFS device, the supply voltage can decrease as the clock frequency scales down, at least down to the minimum voltage level needed to keep the electronic circuitry of the device in predictable operation.
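As an illustration, the frequency-selection logic and the DVFS power relationship described above can be sketched as follows. This is a minimal, idealized model: all constants, function names, and the delay-bound policy are assumptions made for exposition, not a description of any actual device.

```python
# Hypothetical rate-scaling sketch with DVFS (illustrative units/values).

F_MAX = 1.0e9      # assumed maximum clock frequency (Hz)
V_MAX = 1.0        # assumed supply voltage at F_MAX (V)
V_MIN = 0.6        # assumed minimum voltage for predictable operation (V)

def dvfs_power(freq):
    """Dynamic power ~ V^2 * f; the voltage is assumed to track the
    frequency linearly until it reaches the minimum reliable level,
    after which further frequency reductions yield only linear savings."""
    v = max(V_MIN, V_MAX * freq / F_MAX)
    return v * v * freq  # capacitance constant omitted (relative units)

def pick_frequency(load, delay_bound, queue_delay):
    """Scale the clock frequency with the normalized traffic load, but
    revert to full speed when accumulated queueing delay reaches the
    bound, so that lower processing rates cannot impose unbounded delay."""
    freq = load * F_MAX
    if queue_delay >= delay_bound:
        freq = F_MAX          # drain the backlog at full rate
    return min(max(freq, 0.1 * F_MAX), F_MAX)
```

Under this model, halving the frequency more than halves the power whenever the voltage can also be reduced, which is the source of the steeper savings attributed to DVFS above.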
With sleep-state exploitation, the network device alternates between a full-capacity state where it operates at maximum clock frequency as long as traffic is available for processing and a sleep state where power consumption is much lower.
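The duty-cycled behavior of sleep-state exploitation can be captured by a simple average-power model. The sketch below is hypothetical: the power levels are invented, and state-transition overheads are deliberately ignored.

```python
# Idealized sleep-state exploitation model: the device serves traffic at
# full capacity while packets are queued, then enters a low-power sleep
# state; the active-time fraction equals the normalized load.

P_ACTIVE = 10.0   # assumed power at maximum clock frequency (W)
P_SLEEP = 0.5     # assumed power in the sleep state (W)

def average_power(load):
    """Duty-cycled mix of active and sleep power for a normalized
    traffic load in [0, 1], with transition costs ignored."""
    return load * P_ACTIVE + (1.0 - load) * P_SLEEP
```

In practice, transitions between the two states consume time and energy, which is one reason coordination and scheduling of sleep periods matter for the techniques discussed below.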
While a significant body of work can be found in the literature that defines and studies rate-scaling and sleep-state exploitation schemes and architectures, several issues remain unresolved.
First, a clear framework for the coordination of contiguous rate-adaptation devices is not yet available. For sleep-state exploitation techniques with the architectures that have been proposed so far, the lack of coordination may lead to substantial drops in energy-saving performance, while the introduction of coordination requires the broad consensus of a standards body (see for example the ongoing work within the IEEE 802.3az Energy Efficient Ethernet Task Force) or the full cooperation of large portions of the entire network. Even within a single circuit pack, new components, and therefore new sources of energy consumption, must be added to coordinate the clock frequency of multiple rate-scaling devices.
Second, while the energy-saving performance results presented for rate-adaptation techniques are generally encouraging, they heavily depend on the specific set of operating states that are available in a given device and on the resulting power-rate (PR) function. When different schemes are compared, the best solution can only be determined after the actual parameters of a practical device are taken into account. General guidelines for straightforward design practices remain unavailable.
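This dependence on device parameters can be illustrated with two hypothetical PR functions (all numbers are invented for exposition, not measured from any device): with the assumed parameters, neither scheme dominates across all loads.

```python
# Comparing two rate-adaptation schemes through assumed power-rate (PR)
# functions; the constants are illustrative only.

def pr_sleep(load, p_active=10.0, p_sleep=0.5):
    """Sleep-state exploitation: linear interpolation between the
    sleep-state power and the full-capacity power."""
    return load * p_active + (1.0 - load) * p_sleep

def pr_scaling(load, p_active=10.0, p_static=2.0):
    """DVFS rate scaling: roughly cubic dynamic power on top of an
    assumed static power floor."""
    return p_static + (p_active - p_static) * load ** 3

def better_scheme(load):
    """Return which assumed PR function consumes less at this load."""
    return "scaling" if pr_scaling(load) < pr_sleep(load) else "sleep"
```

For these assumed curves, rate scaling wins at moderate loads (where the cubic term is small) while sleep-state exploitation wins at very low loads (where the static floor of the scaling device dominates), showing why no single scheme can be declared best without the parameters of a concrete device.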
Therefore, it would be desirable to have a scheme for rate adaptation that combines the best properties of sleep-state exploitation and rate scaling while overcoming the limitations of the rate-adaptation techniques available in the prior art, such as the need for coordination between devices, the absence of a clearly superior scheme, and the lack of a clear trade-off between energy savings and delay degradation.