The trend in electronics is to move more and more computing resources into portable and wireless applications. As a result, portable handheld products and computers now account for nearly ten percent of total power usage. A significant portion of this ten percent is represented by battery charging efficiencies and the cost of manufacturing power delivery systems. Manufacturers have begun to realize that the size and run time demands of the newer types of portable equipment cannot be met simply by increasing the energy density of batteries.
To increase the functionality and run time of wireless devices, manufacturers are turning to high efficiency management of system functions. When the amount of energy that is needed to complete a function is decreased, the battery has more energy left to perform other processes. This approach to energy management has been applied to several common wireless functional blocks, such as the digital signal processing (DSP) block and the microprocessor.
The shrinking of integrated circuit process technology to the deep submicron range (less than thirteen hundredths of a micron (0.13 μm)) has caused the leakage power to equal or exceed the dynamic power dissipation. Unless the leakage is reduced, power delivery in the deep submicron era will ultimately restrict the ability of handheld wireless devices to meet the customer demand for improved capabilities.
The 2.5G GPRS (General Packet Radio Service) EDGE (Enhanced Data for GSM Evolution) standard for GSM (Global System for Mobile Communications) that presently dominates the cellular market uses relatively low efficiency linear power amplifiers to achieve the required data rate as well as to meet multimode and multiband requirements. Power amplifiers play a critical role in determining the efficiency of a radio frequency (RF) system because of their high output power levels, which can reach three watts (3.0 W) for some cellular systems. As a result, the design of highly efficient power amplifiers is of great importance in order to extend battery life.
A. Adaptive Voltage Scaling
As semiconductor process technology utilizes lower voltages and reaches deep submicron levels, the number of transistors that may be placed on an integrated circuit chip has increased according to Moore's Law. This development has presented two critical circuit design issues. The first issue is the non-uniformity of process parameters within a single integrated circuit die. The second issue is the increase in power consumption per integrated circuit die.
In deep submicron circuit design, the non-uniformity of process parameters within a single integrated circuit die causes differences in transistor and interconnect characteristics across that die. These differences in turn impact the performance of the circuit because they generate deviations in MOSFET (Metal Oxide Semiconductor Field Effect Transistor) drive current, resulting in propagation delay distributions of the critical path across the integrated circuit chip. Furthermore, the distribution of process parameters expands from die to die within a wafer as well as within a lot.
After fabrication, operating variations such as power supply voltage, chip temperature and across-chip temperature also affect the propagation delay. By combining both operational and process induced variations, the propagation delay fluctuates from approximately eighteen percent (18%) to approximately thirty two percent (32%). The yield of CMOS (Complementary Metal Oxide Semiconductor) logic circuits satisfying a specific performance requirement is significantly influenced by the magnitude of critical path delay deviations due to both operational and intrinsic parameter fluctuations.
To compensate for the impact of these parameter fluctuations and to achieve a desired yield, there are two possible approaches. The first approach is to reduce the performance by operating at a lower clock frequency. The second approach is to increase the supply voltage.
While the operating frequency limits the allowable propagation delay, this delay strongly depends on intrinsic process parameters, supply voltage and junction temperature (abbreviated PVT for process, voltage, and temperature). The propagation delay in a MOSFET is proportional to the product of the active resistance of the MOSFET and the load capacitance in accordance with the following expressions:
RON = VDD/(β(VDD-VT)^α)  (1)

CL = CD+CG+CW  (2)
The term RON is the active resistance of the MOSFET. The term CL is the load capacitance. The term α is a velocity saturation term. The term β is the process transconductance parameter. The term VDD is the supply voltage. The term VT is the threshold voltage. The term CD is the drain capacitance. The term CG is the gate capacitance. The term CW is the interconnect capacitance.
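Equations (1) and (2) can be exercised numerically to show how the propagation delay, which is proportional to the product RON·CL, grows as the supply voltage approaches the threshold voltage. The following sketch uses hypothetical example parameter values chosen for illustration only, not figures from any specific process.

```python
def r_on(v_dd, v_t, beta, alpha):
    """Active resistance of the MOSFET per Equation (1)."""
    return v_dd / (beta * (v_dd - v_t) ** alpha)

def c_load(c_d, c_g, c_w):
    """Load capacitance per Equation (2): drain + gate + interconnect."""
    return c_d + c_g + c_w

# Example parameters (assumed, for illustration only).
v_t = 0.35     # threshold voltage, volts
beta = 4e-3    # process transconductance parameter (assumed units)
alpha = 1.3    # velocity saturation term
c_l = c_load(5e-15, 10e-15, 20e-15)   # farads

# Propagation delay is proportional to R_ON * C_L; lowering the supply
# voltage toward V_T makes R_ON, and hence the delay, grow rapidly.
delay_high_vdd = r_on(1.2, v_t, beta, alpha) * c_l   # 1.2 V supply
delay_low_vdd = r_on(0.6, v_t, beta, alpha) * c_l    # 0.6 V supply
assert delay_low_vdd > delay_high_vdd
```

The sketch reproduces the qualitative behavior discussed below: the lower supply voltage more than doubles the RC delay in this example, because the (VDD − VT) term in the denominator of Equation (1) shrinks faster than VDD in the numerator.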
If a design is fabricated at the best process corner and is operating at low temperature, it needs less than three fourths (¾) of the minimum supply voltage required in the worst case. K. A. Bowman, Xinghai Tang, J. C. Eble, and J. D. Meindl, “Impact of Extrinsic and Intrinsic Parameter Fluctuations on CMOS Circuit Performance,” IEEE Journal of Solid State Circuits, Volume 35, No. 8, pp. 1186-1193, August 2000. Process parameters and operating junction temperature are not controllable, but the supply voltage is controllable. This results in opportunities to reduce power consumption by adjusting the supply voltage with regard to process and temperature.
In many portable computing devices (e.g., MP3 players and digital cameras) the full processing power of a processor is not required all the time. There are certain times when an operating frequency can be reduced, and a lower frequency means a longer allowable delay. This longer time margin also allows the supply voltage level to be lowered, even though the lower voltage increases the propagation delay. Power consumption is quadratic with the supply voltage and is proportional to the operating frequency. Because of these relationships, reducing both the operating frequency and the supply voltage provides highly energy-efficient operation.
This technique is referred to as adaptive voltage scaling (AVS). Adaptive voltage scaling decreases power consumption without sacrificing performance provided that the tasks to be performed are finished within the allowed time. From the trade off between performance and energy consumption, supplying just enough voltage to a system at a given frequency represents its optimum power consumption.
For additional information on adaptive voltage scaling, refer to the following papers. T. D. Burd and R. W. Brodersen, “Design Issues for Dynamic Voltage Scaling,” in 2000 Proceedings of the ISLPED Conference, pp. 9-14. G.-Y. Wei and Mark Horowitz, “A Fully Digital, Energy-Efficient Adaptive Power-Supply Regulator,” IEEE Journal of Solid State Circuits, Volume 34, No. 4, pp. 520-528, April 1999. D. W. Kang, “Low Power Digital Adaptive Voltage Controller Design Based on Hybrid Control and Reverse Phase Mode,” Ph.D. dissertation, Dept. Elec. and Comp. Eng., Northeastern University, Boston, Mass., 2003. J. Kim and Mark Horowitz, “An Efficient Digital Sliding Controller for Adaptive Power Supply Regulation,” in 2001 Proceedings of the VLSI Circuits Dig. Tech. Papers Conf., pp. 133-136.
Adaptive voltage scaling (AVS) in the general sense refers to a power supply rail that adjusts its voltage corresponding to the demands of its load. Its load could be any compliant electronic device. The enormous benefit of adaptive voltage scaling (AVS) is that for completing the same function, an AVS compliant processor will use thirty percent (30%) to sixty percent (60%) less energy than a fixed voltage processor. The way to reduce energy consumption in a processor, then, is not only to reduce the clock frequency as low as possible, but, more importantly, to reduce the core supply voltage to the minimum amount for a given clock frequency.
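The energy saving that adaptive voltage scaling offers can be illustrated with a simple dynamic-energy model. Because the dynamic energy spent per task is proportional to the square of the supply voltage but independent of frequency (a slower clock merely stretches the task in time), lowering the rail for a given workload saves energy quadratically. The capacitance, cycle count, and voltage figures below are illustrative assumptions, not measurements of any real processor.

```python
def dynamic_energy_per_task(c_eff, v_dd, cycles, activity=1.0):
    """Energy = activity * C_eff * V_DD^2 per cycle, times cycle count.

    Note the task energy does not depend on frequency directly; the AVS
    saving comes from the lower V_DD that a lower frequency permits.
    """
    return activity * c_eff * v_dd ** 2 * cycles

C_EFF = 1e-9    # effective switched capacitance, farads (assumed)
CYCLES = 1e6    # cycles needed to finish the task (assumed)

e_fixed = dynamic_energy_per_task(C_EFF, 1.2, CYCLES)  # fixed 1.2 V rail
e_avs = dynamic_energy_per_task(C_EFF, 0.8, CYCLES)    # rail scaled to 0.8 V

saving = 1 - e_avs / e_fixed   # fractional energy saved by AVS
```

With these assumed numbers the saving is about fifty-six percent, which falls inside the thirty to sixty percent range quoted for AVS compliant processors.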
B. Low Voltage Differential Signaling
Serial data streaming techniques have become important in the field of interface microelectronics as manufacturers continue to adopt the technology for intra-system connections. As electronic technology has developed in sophistication, the inability of parallel transmission to accommodate higher speeds and wider words has become more apparent. Designing systems with wide parallel word paths has proved to be too cumbersome and has presented serious technical challenges in the areas of noise, power, speed and cost. Alternatively, low voltage differential signaling (LVDS) combines extremely low power consumption and exceptionally low electrical and radiated noise with data transmission speeds that are hundreds of times faster than parallel single-ended signaling, reaching tens of gigahertz.
Differential signaling operates at the receiver by comparing the difference between two signals. The constant current used in most forms of differential signaling reduces the amount of noise induced into the electrical system. The opposing currents of the two signals that comprise the differential signal cancel out a large portion of each other's magnetic fields, thus reducing radiated emissions. In this way, noise induced into one signal is equally induced into the other signal, so that the difference between the two signals is not affected. This phenomenon allows the differential signal to operate at reduced signaling levels compared to single-ended signals, thus reducing emissions even further.
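The common-mode rejection described above can be demonstrated numerically: when the same noise voltage couples into both lines of a differential pair, it drops out of the difference the receiver evaluates. The swing and noise magnitudes below are assumed values chosen only to make the cancellation visible.

```python
import random

swing = 0.35          # differential swing in volts (assumed value)
bits = [1, 0, 1, 1, 0]

recovered = []
for b in bits:
    pos = swing / 2 if b else -swing / 2   # true line
    neg = -pos                              # complementary line
    # The same noise couples into both conductors of the pair.
    noise = random.uniform(-0.1, 0.1)
    pos_rx, neg_rx = pos + noise, neg + noise
    # The receiver only evaluates the difference, so the common-mode
    # noise cancels exactly and the bit decision is unaffected.
    recovered.append(1 if (pos_rx - neg_rx) > 0 else 0)

assert recovered == bits
```

Even though the injected noise here is comparable in magnitude to half the signal swing, every bit is recovered correctly, because identical noise on both conductors contributes nothing to the differential voltage.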
The greatest drawback to using parallel single-ended techniques, however, proved not to be the issue of speed, but the substantial increase in radiated noise from the greater number of parallel signals needed to provide the additional bits. A more attractive alternative is serialization via differential signaling techniques. Serialization not only reduces emissions to meet stringent government mandates but also limits the number of wires running through the small-diameter hinge connecting the base unit to the display. This further increases the mechanical integrity and reliability of the connection.
After crosspoint switching became available at the silicon level, serialization of the communications transmission network moved from the inter-system to the intra-system level. Transmitting the serialized data through the system in the same form it was sent over the network was the natural choice. The input signals must be synthesized to operate at faster speeds to meet the high-speed requirements of modern communication. Both optical and high-speed differential signaling are being investigated and used to achieve these goals.
Improved serialization techniques have opened the door to new applications, particularly in the ultra-portable realm of cell phones and battery powered devices. Cell phones, for example, are encountering many challenges such as high-resolution displays with more colors. Cameras and other convergent functions that are now appearing in cell phones add complexity to the challenge of sending signals across the hinge in clamshell designs (also called flip designs). The bi-directional microcontroller interface occasionally used to lower power between the base and the display poses a new challenge of how to efficiently provide serialization in opposite directions.
Bi-directional serialization provided the solution, but the microcontroller interface posed its own challenge for the serializer. In the past, all serializers were provided with a constant clock at the parallel data rate that was used to develop the high-speed serial clock used for the serial transmission of the data. This parallel clock was not something that was normally available with a microcontroller interface. With the latest serializers and deserializers, it is still necessary to provide a constant clock, but not necessarily at the same frequency as the interface.
Three additional considerations—power, size and cost—grew in importance with the use of serializers in ultra-portable consumer applications. Battery life, an important consideration in all mobile applications, becomes even more critical when the application itself limits the size of the battery, and the primary location of the device is in a pocket of the user rather than on a desktop.
For the last ten years at least, the industry has struggled with how best to integrate serialization. Some of the key integration issues center around the twofold need for high-speed clocking and highly tailored input/output (I/O). The integration of high-speed signaling has proved an extremely difficult challenge due to signaling power requirements and the noise that is generated by the rest of the device. A significant breakthrough for VLSI (Very Large Scale Integration) techniques, low voltage differential signaling (LVDS) is a relatively quiet and noise-immune technology that offers the double advantage of very low power consumption and excellent noise rejection characteristics. The power and electromagnetic interference (EMI) requirement of mobile applications provides another challenge in this area.
For the reasons described above, single-ended parallel techniques have inherent drawbacks. The advantages of differential serial data streaming and advances in the manufacture of serializer/deserializer systems will likely accelerate the proliferation of serialized intra-system data interfaces in all application areas.
Data transmission between different digital signal processing integrated circuits (ICs) influences the power consumption and the system cost. Improvements in the performance of the input/output (I/O) bottleneck for digital communication between the integrated circuits (ICs) have been made at the expense of increased on-chip power dissipation. With output switching times in the subnanosecond range, the di/dt noise coupled into the power distribution network becomes unacceptable, especially when many output pads switch simultaneously.
Digital CMOS devices typically drive their output pads with a simple CMOS inverter which swings the output pad from ground to the supply voltage VDD. The output inverter, driven by a chain of scaled inverters, is made large in order to drive its large capacitive output load quickly. This large driver, in turn, contributes considerably to the total power dissipation of the application.
In submicron CMOS circuits, the dynamic factor accounts for eighty five percent (85%) to ninety percent (90%) of the total power consumption because the threshold voltage is commonly on the order of thirty five hundredths of a volt (0.35 V) to seven tenths of a volt (0.70 V) so that the leakage current is negligible. See, for example, M. Pedram, “Design Technologies for Low Power VLSI,” in Encyclopedia of Computer Science and Technology, Volume 36, Marcel Dekker, Inc., pp. 73-96, 1997.
Therefore, power dissipation in digital CMOS circuits is approximately:

PCMOS ≈ Pdynamic = αfC(VDD)^2  (3)
From Equation (3), supply voltage reduction appears to be the most promising method of reducing dynamic power dissipation because power depends quadratically on the supply voltage. Therefore, the power dissipation problem in transmission lines can be addressed by reducing the signal swing of the output driver. The problem with this method is that lower supply voltages cause a reduction in performance, as indicated in Equations (1) and (2). As the value of the supply voltage approaches the threshold voltage, the circuit delays abruptly become large because the output capacitance is charged and discharged very slowly.
The simple approach of sizing the final output driver stage large enough to drive a terminated transmission line is problematic with respect to power dissipation. For example, with one and two tenths volt (1.2 V) logic swings, even the technique of terminating a fifty ohm (50Ω) line with a Thevenin equivalent fifty ohm (50Ω) resistor to six tenths of a volt (0.6 V) demands a current drive of twelve milliamps (12 mA). A shape factor of several thousand is thus required to lower the resistance of the output driver to a level where the voltage drop across the output transistors is several percent of the swing.
On the other hand, low voltage differential signaling (LVDS) is an alternative for a low power and high performance interface. The terminator power dissipation scales with the square of the voltage swing, and the rise/fall times are also decreased due to the small voltage swing. Because the di/dt effect is reduced by the decrease in absolute current, the noise, as a fraction of the signal voltage swing, remains constant. The effective impedance of the device should still be a small fraction of the line impedance in order to achieve acceptable noise margins. The output driver transistors operate in either the cutoff or linear regions. By controlling the gate-to-source voltage, this linear resistance can be forced to match the resistance of the external line.
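A back-of-the-envelope comparison makes the termination advantage concrete. The full-swing figures follow the fifty ohm example above; the LVDS figures (roughly a 350 mV swing driven into a 100 Ω loop) are typical textbook values, assumed here only for illustration.

```python
def drive_current(swing, r_term):
    """Ohm's law: current the driver must supply into the termination."""
    return swing / r_term

# 1.2 V logic, Thevenin-equivalent 50 ohm termination to 0.6 V:
# the driver sees 0.6 V across 50 ohms at either logic level.
i_full_swing = drive_current(0.6, 50)

# Typical LVDS link (assumed values): ~350 mV swing across a 100 ohm
# differential termination.
i_lvds = drive_current(0.35, 100)

# The LVDS drive current is several times smaller, and the power
# dissipated in the terminator shrinks with the square of the swing.
assert i_lvds < i_full_swing
```

With these numbers the full-swing scheme needs 12 mA while the LVDS link needs only 3.5 mA, and because terminator power goes as the square of the swing, the dissipation gap is wider still.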
For an overall power reduction, it is not desirable to have a high power digital interface driving low power analog circuits. A power saving alternative may be found in using low swing high speed serial data transmission. H. Zhang, V. George, and J. M. Rabaey, “Low-Swing On-Chip Signaling Techniques: Effectiveness and Robustness,” IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Volume 8, No. 3, pp. 264-272, June 2000.
This kind of high speed data transmission has to be asynchronous. Because there is no common clock connection between the data sender and the receiver in an asynchronous serial data link, an economic solution requires clock recovery from the NRZ (Non-Return to Zero) data stream. Typically, a phase locked loop (PLL) is employed to phase lock to the data and control the frequency of a new, local clock. See, for example, B. Lai and R. C. Walker, “A Monolithic 622 Mb/s Clock Extraction Data Retiming Circuit,” Int. Solid State Circuits Conf., pp. 144-145, San Francisco, Calif., 1991. See also A. Pottbaecker, U. Langmann, and H.-U. Schreiber, “A Si Bipolar Phase and Frequency Detector IC for Clock Extraction up to 8 Gb/s,” IEEE Journal of Solid State Circuits, Vol. 27, No. 12, pp. 1747-1751, 1992. It would be desirable to have a low power high swing serial interface circuit with a digital clock and data recovery (CDR) circuit that does not employ a phase locked loop (PLL) circuit.
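One well-known digital alternative to PLL-based clock recovery is blind oversampling: the NRZ stream is sampled several times per bit interval by a free-running local clock, and each bit is decided by a vote over its group of samples. The toy sketch below (threefold oversampling with majority vote) is an illustrative assumption about one such digital CDR approach, not the specific circuit contemplated here.

```python
def recover_bits(samples, oversample=3):
    """Majority-vote each group of `oversample` samples into one bit."""
    bits = []
    for i in range(0, len(samples) - oversample + 1, oversample):
        group = samples[i:i + oversample]
        bits.append(1 if 2 * sum(group) > len(group) else 0)
    return bits

# An NRZ bit stream sampled three times per bit interval.
tx_bits = [1, 0, 0, 1, 1, 0, 1]
samples = []
for b in tx_bits:
    samples.extend([b, b, b])

# Corrupt one sample to mimic a noisy edge; the majority vote over the
# remaining samples of that bit still recovers the data correctly.
samples[1] ^= 1
assert recover_bits(samples) == tx_bits
```

Because the decision is purely combinational and the sampling clock need not be phase locked to the data, this style of recovery lends itself to an all-digital, PLL-free implementation.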
Therefore, there is a need in the art for a system and method that is capable of providing an improved low power high swing serial interface with a digital clock and data recovery (CDR) circuit. In particular, there is a need in the art for a system and method that is capable of providing a robust ultra low power high swing serial interface circuit with a digital clock and data recovery (CDR) circuit that does not employ a phase locked loop (PLL) circuit.