Unless otherwise indicated herein, approaches described in this section are not prior art to the claims listed below and are not admitted as prior art by inclusion in this section.
Wireless modems for mobile devices are continually evolving to support ever higher data rates, improve spectral efficiency, and provide lower latency. Each new enhancement tends to increase processing requirements and hence power consumption. Although the batteries that power mobile devices are also increasing in capacity over time, battery capacity grows at a much slower rate, and preservation of battery life is an increasingly important consideration in modem design.
Many of the circuit techniques used to meet high throughput requirements do not scale well in power terms when throughput is reduced, so that in some cases a 90% reduction in data rate might reduce device power consumption by only 10%. The efficiency (energy per bit) of transferring data is therefore much lower at low data rates, and while lower-power modes of operation (for example, dynamic voltage/frequency scaling) are possible if the lower throughput is predictable, switching between low- and high-power modes is not instantaneous. The low-latency requirement means that wireless modems (herein interchangeably referred to as "modems") must respond very rapidly to sudden peaks in data traffic in reaction to information contained in the control channel, which limits some opportunities for power reduction.
This issue is better illustrated with a simple example. In the case of 3rd Generation Partnership Project (3GPP) Long-Term Evolution (LTE), a User Equipment (UE) modem needs to receive and decode the Physical Downlink Control Channel (PDCCH) on every Transmission Time Interval (TTI, equal to 1 ms). The PDCCH enables the modem to determine how much data the network has sent to it within an individual TTI. Under the existing LTE standard, on every successive TTI the modem needs to have all of its internal circuitry ready to process a variable amount of data, up to the highest downlink data rate that its capability class can support. For an LTE Category 4 device the maximum instantaneous data rate is 150 Mbits per second. There are many scenarios in which the maximum data rate can safely be predicted never to exceed a level several orders of magnitude lower than the maximum instantaneous data rate that the UE can support. Unfortunately, the 3GPP standard at present constrains the modem to always be ready to process at the maximum rate, which prevents the modem circuitry from being set to a far more power-efficient lower peak processing state, even when the data rates are known to be much lower.
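As an illustrative aside, the per-TTI worst case implied by the figures above can be computed directly. The sketch below uses only the values quoted in this section (150 Mbit/s peak rate, 1 ms TTI); the variable names are illustrative, not drawn from any standard.

```python
# Worst-case amount of data an LTE Category 4 UE must be ready to process
# within a single TTI, derived from the figures quoted above.
peak_rate_bps = 150e6   # Category 4 maximum instantaneous downlink rate
tti_s = 1e-3            # LTE Transmission Time Interval (1 ms)

bits_per_tti = peak_rate_bps * tti_s
print(bits_per_tti)        # 150000.0 bits per TTI
print(bits_per_tti / 8)    # 18750.0 bytes per TTI
```

In other words, the modem circuitry must remain dimensioned to absorb up to roughly 18.75 kilobytes in any given millisecond, regardless of how little data is actually scheduled.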
An additional consideration is that the circuitry response time needed to re-configure the modem for a lower processing capability is typically longer than the TTI duration used by a base station to schedule a varying amount of data to a specific UE within a given TTI. Given that the speed of data rate change is driven by the TTI duration, it is generally not possible to track these variations with circuitry configuration changes in order to reduce modem power consumption.
For a voice call a modem might only need to process data on the order of 10 kbps, which is 15,000 times below the peak processing capability of an LTE Category 4 device. The ratio between device peak data rates and the predicted processing requirements of some modem use cases is even larger for higher LTE device categories. This imbalance between predicted worst-case data rates and peak data rates is expected to increase even further for 5th Generation (5G) technology, where peak data capabilities could be on the order of 10 Gbits/s, which is six orders of magnitude (i.e., a factor of one million) larger than the typical 10 kbit/s data rate required for voice communications.
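The ratios quoted above follow directly from the data rates given in this section, as the short sketch below confirms; the values are those stated in the text, not measurements.

```python
# Ratio of peak device capability to a typical voice-call data rate,
# using the figures quoted in this section.
voice_rate_bps = 10e3        # ~10 kbit/s needed for a voice call
lte_cat4_peak_bps = 150e6    # LTE Category 4 peak downlink rate
nr_peak_bps = 10e9           # projected 5G peak rate (~10 Gbit/s)

print(lte_cat4_peak_bps / voice_rate_bps)  # 15000.0 (LTE Category 4)
print(nr_peak_bps / voice_rate_bps)        # 1000000.0 (projected 5G)
```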
These constraints are unnecessarily restrictive for many popular internet applications, which maintain a predictably low level of background data traffic and are designed to tolerate comparatively lengthy end-to-end delays in the transfer of larger volumes of data. In such scenarios, low latency and high instantaneous data rates are not necessary, and keeping the modem in a high state of alertness wastes battery power. This waste diminishes the user experience and ultimately reduces operator revenues by restricting the total time that the UE is available to communicate with the network.
In the existing art, network features such as discontinuous reception (DRX) and discontinuous transmission (DTX) are used to reduce the active duty cycle and thereby conserve power. Such techniques inevitably increase latency, but at the start of each reception period the modem must generally start up in its highest power state so that it can be immediately ready to receive data at the maximum rate if the control channel signals that active data is present. Provision is also made in the existing art to reduce device operating power in response to specific events, such as an increase in temperature or a low battery indication. In an alternative approach, a device can optionally terminate a communication if it detects that its power consumption exceeds a predetermined threshold. However, the network is generally in control of all the main operating parameters which affect the modem power consumption. As the modem is always required to be able to operate at the maximum capability of the device category under which it registered to the network, there is limited scope for the modem to actively manage its own power consumption to maximize battery life.
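The limitation of duty-cycle techniques such as DRX can be sketched with a simple, hypothetical power model. The power figures below are placeholders chosen for illustration only (not measured values); the point is that because the modem must wake into its highest power state at each reception period, average power remains dominated by the active-state term.

```python
# Minimal duty-cycle model of DRX-style power saving (illustrative only;
# p_active_mw and p_sleep_mw are placeholder figures, not measurements).
def avg_power_mw(p_active_mw: float, p_sleep_mw: float, duty_cycle: float) -> float:
    """Average power for a modem active for `duty_cycle` of the time."""
    return duty_cycle * p_active_mw + (1.0 - duty_cycle) * p_sleep_mw

# Example: 10% active duty cycle with assumed 800 mW active / 5 mW sleep.
# The active state contributes 80 mW of the 84.5 mW average, illustrating
# that further savings require reducing active-state power itself.
print(avg_power_mw(800.0, 5.0, 0.10))  # 84.5
```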
Moreover, thermal dissipation is increasingly becoming a problem in the highest-performance wireless modems, whether or not they are battery powered. It is likely that in the future some modems may only be able to offer maximum throughput for limited periods before the internal temperature rise becomes excessive, making it necessary to constrain power usage to remain within operating temperature limits. If mitigation strategies for such problems are to be most effective, they should be driven by the device rather than the network, as the network cannot reasonably maintain a knowledge base of the thermal characteristics of every device on the market.