The present invention generally concerns circuits and methods for sensing voltage feedback information in digitally programmable voltage sources.
Most modern electronic devices depend on tightly regulated sources of electrical energy for their operation. In a typical arrangement, the flow of energy is regulated in a manner that ensures a constant voltage at the power supply terminals of the powered devices. The performance of these devices (e.g., speed, power consumption, error rate, reliability, etc.) depends strongly on the magnitude of the voltage supplied by the power source. Therefore, depending on the system considerations and operational objectives, this voltage may need to be adjusted and regulated with great precision. Moreover, many auxiliary parameters of the power source may need to be modified as well. Among the most frequently encountered are protection limits (e.g., overvoltage, undervoltage, overcurrent), startup/shutdown options (e.g., delay, ramp rate), feedback loop compensation, and others. The necessity to frequently alter these parameters forces designers to leverage digital techniques in constructing circuits to control the flow of power. Digital technology allows such modifications to be performed via communication with a supervisory entity without the need for physical modification. Such adjustments can also be performed in-system, sometimes even without interrupting the operation of the equipment.
One of the major challenges in implementing digital control in power conversion is sensing the voltage generated by the converter (the output voltage) as accurately as possible. Error in sensing this parameter cannot be compensated in any way within the control circuitry and will adversely affect the quality of regulation to a degree proportional to the sensing error.
The standard way of developing voltage feedback information in power supplies is based on providing a stable and precise reference voltage source equal to the desired output voltage and comparing it with the actual output voltage. The difference is then amplified and its frequency characteristic compensated (modified) to achieve dynamic regulation objectives (e.g., stability, disturbance rejection, robustness, etc.).
From this model one can derive two sources of steady state error in sensing the output voltage: one coming from the inaccurate magnitude of the voltage reference, and the second from the offset error introduced during the process of subtracting it from the actual output voltage. Multiple techniques have been developed to minimize these errors to an acceptable level.
Digital technology changes the nature of voltage sensing error slightly. In the typical implementation of a digitally programmable voltage source, as shown in FIG. 1, the amplitude of the output voltage is converted to digital form by the analog to digital (A/D) converter 10. Next, the digital signal representing the reference (target) voltage 11 is subtracted from the digitized, measured value of the output voltage by a digital subtraction circuit 12 to obtain the magnitude of the error. The result (the error in digital form) is subsequently used to modify the rate of flow of power in the power conversion subsystem of the voltage source according to methods known in the art.
This arrangement keeps the first source of the error (the magnitude of the analog reference voltage) unaffected, even though the analog reference shifts to the A/D converter 10. The second source of error, the offset of the subtracting entity, is absent (digital subtraction does not suffer from offset error), but is replaced by three other sources of inaccuracy: (1) A/D conversion error (other than the analog reference error), (2) digitization resolution error (also known as quantization noise), and (3) conversion delay error. A/D conversion error may be caused by such factors as non-ideal ratios of resistors or capacitors used for the conversion, clock charge feed-through, sample and hold error, etc. Digitization resolution error is caused by the fact that the output voltage, which can assume any value over a continuous range (as an analog quantity), must be converted to a digital representation which can assume only discrete values out of the set realizable for a given A/D converter. Conversion delay is caused by the finite time that is needed for an A/D converter to convert the analog voltage present at its inputs into the digital representation, and is equivalent to the signal group delay.
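The digitization resolution error described above can be illustrated with a short sketch (a minimal model, not part of the disclosed circuit; the full-scale voltage and resolution chosen here are hypothetical):

```python
# Toy model of quantization (digitization resolution) error: a continuous
# output voltage is mapped onto the nearest discrete code of an N-bit A/D
# converter, and the residual is the quantization error.

def quantize(v_out, v_full_scale=2.0, bits=12):
    """Return the ADC code and the resulting quantization error in volts."""
    lsb = v_full_scale / (1 << bits)              # size of one code step
    code = min(round(v_out / lsb), (1 << bits) - 1)
    v_quantized = code * lsb                      # voltage the code represents
    return code, v_out - v_quantized              # bounded by +/- lsb/2

code, err = quantize(1.2003)                      # err magnitude <= ~0.24 mV
```

With ideal rounding the residual never exceeds half of one least significant bit, which is the irreducible error floor set by the converter's resolution.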
These three errors, in certain applications, necessitate the use of a high performance, high quality A/D converter. Such an A/D converter may need to have 12-bit resolution, low differential and integral non-linearity, and a high sampling rate (at least equal to the power switching frequency). An A/D converter meeting such requirements is expensive, consumes significant power, and occupies substantial die area. As such, it is one of the most important obstacles to introducing digital control into high volume, low cost power converters.
Due to the specific properties of the power supply as a regulation system, however, it is possible to simplify the structure of the A/D converter dramatically while retaining its full utility in such applications. The modification comes from the simple observation that accurate feedback information is needed only if the system is able to maintain its output close to the desired value. If the deviation exceeds a certain maximum amplitude, it is a sign of serious malfunction and the device should protect itself and the system it powers by shutting down as soon as possible. Such behavior does not reduce the utility of the power supply because in modern electronics the tolerance of the system to voltage deviation is minimal. If the supply voltage cannot be maintained close to the optimal value, the operation of the system cannot be maintained and it should shut down to minimize the risk of further damage.
If, on the other hand, the supply voltage can be maintained close to the optimal value, it is possible to use a voltage feedback A/D converter that is designed to operate only within a narrow range around the target voltage, as shown in the converter of FIG. 2. In the extreme case, the resolution can be limited to one bit (output voltage above the target or below the target). This technique is known in control theory and used in practice in many industries as hysteretic, bi-stable (on-off), or sliding mode control. Typically, resolving the sign and the amplitude of the regulation error needed to switch the controls is not recognized as A/D conversion, but the principle is still exactly the same.
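The one-bit hysteretic principle just described can be sketched as follows (a simplified illustration; the function name, hysteresis band, and voltage values are hypothetical, not taken from the disclosure):

```python
# One-bit "A/D conversion" for hysteretic (on-off) control: the only
# feedback information is whether the output is above or below the target.
# A small hysteresis band around the target prevents chattering.

def hysteretic_step(v_out, v_target, prev_on, hysteresis=0.01):
    """Return True to enable the power stage, False to disable it."""
    if v_out < v_target - hysteresis:
        return True        # output too low: deliver power
    if v_out > v_target + hysteresis:
        return False       # output too high: stop delivering power
    return prev_on         # inside the band: keep the previous state
```

The comparison against the two thresholds is exactly the single-bit quantization of the regulation error mentioned in the text.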
Practical considerations associated with power conversion, like precise regulation, minimization of the output ripple, good dynamic operation, noise rejection and robustness, however, require feedback information with higher resolution. One possible structure of this type of digital feedback sensing, characterized by great simplicity, was proposed in Gu-Yeon Wei, Mark Horowitz, “A Low Power Switching Power Supply for Self-Clocked Systems”, 1996 International Symposium on Low Power Electronics and Design, pp. 313–318. This circuit takes advantage of the fact that the delay of a standard digital gate depends strongly on the supply voltage. This dependence makes it possible to construct a voltage controlled oscillator. The frequency of this oscillator is then compared with a reference frequency, and the difference represents the error. A similar principle is used as the basis of the circuit presented in Benjamin J. Patella, Aleksander Prodic, Art Zinger, Dragan Maksimovic, “High Frequency Digital Controller IC for DC/DC Converters”, IEEE Applied Power Electronics Conference, March 2002, pp. 374–380. Here, the propagation time through a delay line whose Vdd is connected to the measured voltage is evaluated as a proxy for the output voltage. Both these solutions sacrifice accuracy and the possible adjustment range of the output voltage for simplicity.
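The gate-delay sensing principle used in the cited circuits can be modeled crudely as follows (a toy model only; the inverse-proportional delay law, stage count, and constants are hypothetical simplifications, not values from the cited papers):

```python
# Toy model of voltage sensing via gate delay: stage delay falls as the
# supply voltage rises, so the frequency of a ring oscillator powered
# from the measured voltage serves as a proxy for that voltage.

def ring_oscillator_freq(v_supply, stages=31, k=5e-9):
    """Frequency of a ring oscillator whose stage delay is (crudely)
    modeled as inversely proportional to the supply voltage."""
    stage_delay = k / v_supply                # delay shrinks as voltage rises
    return 1.0 / (2 * stages * stage_delay)   # one period = 2*N stage delays

def frequency_error(v_supply, f_ref):
    """Positive when the sensed voltage is above target, negative below."""
    return ring_oscillator_freq(v_supply) - f_ref
```

Comparing the oscillator frequency with a fixed reference frequency yields a signed error signal, exactly as described for the Wei/Horowitz circuit above.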
A more universal approach (similar to FIG. 2) is presented in Angel V. Peterchev, Jinwen Xiao, Seth R. Sanders, “Architecture and IC Implementation of a Digital VRM Controller”, IEEE Transactions on Power Electronics, January 2003, pp. 356–364. In this approach, the process of obtaining the error signal is performed in the analog domain. It is based on subtracting the output voltage from the reference generated by a DAC (Digital to Analog Converter). The high resolution required for precise regulation in this approach is shifted from a fast A/D converter to a slow DAC, with great savings in complexity, cost and power consumption. Consequently, only the difference between these voltages is converted to digital form. The Peterchev reference advises that 3 to 4 bits of resolution are sufficient (even less if a non-linear control scheme is used). A suitable flash A/D converter can be implemented with moderate resources. This approach allows for precise control within a range whose center is set by the DAC.
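The window-converter principle of this approach can be sketched numerically as follows (an illustrative model, not the cited implementation; the LSB sizes, resolutions, and saturation behavior chosen here are hypothetical):

```python
# Sketch of the window A/D principle: a slow, high-resolution DAC sets the
# center of a narrow window, and a coarse flash A/D converter digitizes
# only the difference between the output voltage and that center.

def window_adc(v_out, dac_code, dac_lsb=0.001, flash_bits=4, flash_lsb=0.005):
    """Return the signed digital error code seen by the controller."""
    v_center = dac_code * dac_lsb                  # high-resolution, slow DAC
    v_error = v_out - v_center                     # analog subtraction
    half_range = (1 << (flash_bits - 1)) - 1       # e.g. +7 codes for 4 bits
    code = round(v_error / flash_lsb)              # coarse flash conversion
    return max(-half_range - 1, min(half_range, code))  # saturate at the rails
```

Near the target the converter resolves the error finely, while large deviations simply saturate at the window edges, which (per the observation above) already signal a fault condition.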
This technique, even though more precise than those described above, still suffers from offset and drift errors introduced by the multiple amplifiers that are necessary for implementing this structure. The exact sources of error depend on the details of the implementation, but typically the following errors may be distinguished: (i) internal DAC amplifier error; (ii) differential to single ended feedback voltage conversion error; (iii) analog reference and feedback voltage subtraction error; and (iv) flash A/D comparator offset error. The magnitude of these errors typically ranges from a few millivolts to a few tens of millivolts over the span of operating conditions. These inaccuracies corrupt the feedback information, which in turn leads to an erroneous voltage produced by the power supply. Moreover, potential nonmonotonicity of the comparator ladder in a flash A/D converter (overlapping of adjacent threshold levels) may result in the loss of stability. As trimming a large number of comparators is not practical, auto-zeroing topologies must be used. This, in turn, results in increased complexity, size, power consumption and slower operation.
An alternative way of addressing this problem is an analog preamplifier, which increases the amplitude of the analog signal before it enters the A/D converter, thus reducing the relative importance of individual comparator errors (reducing differential non-linearity). Such preamplifiers, however, introduce their own offset and drift errors. They also introduce additional errors due to variation of the gain, resulting in degraded integral non-linearity.
Accordingly, there exists a need for an amplifier that is capable of achieving the aforementioned objectives, but with a simpler, more compact and more accurate structure.