Over the last decade, advances in semiconductor processing technology have made digital signal processing increasingly complex and computing capability increasingly powerful. Greater reliance on digital signal processing imposes more stringent requirements on the interface to the real, analog signal world. Currently, the performance of analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) is being pushed toward higher resolution, higher conversion rate, and, most important of all, the lower power dissipation suitable for embedded SOC applications.
Current semiconductor processing technology limits the resolution of most ADCs to around 10–12 bits. To achieve higher resolution, the general solutions are trimming of passive components or circuit-level calibration techniques. Trimming is not an attractive approach because of its high implementation cost; therefore, calibration techniques based on circuit design are currently more popular.
The first self-calibrating ADC, proposed by U.C. Berkeley researcher Hae-Seung Lee in 1984, is based on a successive-approximation ADC. During calibration mode, the capacitor errors are measured through a successive-approximation algorithm and saved to on-chip memory. During normal mode, the stored capacitor-error terms are recalled from on-chip memory and removed from the conversion result before the data is sent out. In this approach a dedicated calibration mode is necessary, which means that analog input data cannot be converted during calibration. In addition, the successive-approximation ADC has the disadvantage of low conversion speed, which makes it unsuitable for video applications.
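The measure-store-subtract flow described above can be sketched in software. This is a hypothetical illustration only, not Lee's actual circuit: the mismatch values, the 4-bit width, and the helper names are assumptions, and the analog measurement step is replaced by a stand-in function.

```python
# Hypothetical sketch of foreground self-calibration for a binary-weighted
# capacitor array: measure each bit's capacitor error once in calibration
# mode, store it, then subtract the accumulated error from raw codes.

def calibrate(measure_bit_error, n_bits):
    """Calibration mode: measure_bit_error(i) stands in for the analog
    measurement of bit i's capacitor mismatch (in LSBs); the results
    play the role of the on-chip error memory."""
    return [measure_bit_error(i) for i in range(n_bits)]

def correct(raw_code, error_mem, n_bits):
    """Normal mode: recall the stored errors for the bits that are set
    and remove them from the raw code before the data is sent out."""
    err = sum(error_mem[i] for i in range(n_bits) if (raw_code >> i) & 1)
    return raw_code - round(err)

# Example with made-up per-bit mismatch values (in LSBs) for 4 bits.
errors = calibrate(lambda i: [0.0, 0.1, -0.2, 0.5][i], 4)
print(correct(0b1010, errors, 4))  # bits 1 and 3 carry 0.6 LSB of error -> 9
```

The key limitation noted above is visible here: `calibrate` must run as a separate pass, during which no input samples can be converted.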
Since 1990, most researchers have focused on calibration algorithms for high-speed A/D architectures, especially pipelined ADCs. In 1992, Seung-Hoon Lee at the University of Illinois proposed a digital calibration algorithm for pipelined ADCs. Instead of analog calibration, the algorithm measures the pipeline stage errors in the digital domain and corrects them in the digital domain. Currently, most offline calibration uses a technique similar to Lee's approach, measuring errors in the digital domain and correcting them in the digital domain with some extra analog circuitry and extra control timing.
The offline calibration described above needs an extra operation mode for calibration. However, in some data-transmission applications, extra timing may not be available. Since the late 1990s, researchers have developed calibration algorithms that run in the background so that calibration does not interrupt the normal operation of the entire system. In 1997, Un-Ku Moon at Oregon State University proposed a skip-and-fill algorithm to make background calibration possible. This algorithm uses error-measurement techniques similar to those described above; however, to run in the background, the measurement cycles are spread over a long period of time. The algorithm skips one sample out of every few samples, and the 'stolen' sample time is used to measure the non-ideality errors. The skipped sample's output is then reconstructed by digital post-processing.
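The skip-and-fill scheduling can be sketched as follows. This is a simplified illustration under stated assumptions: the skip period of 8 and the use of plain linear interpolation are choices made here for clarity, whereas the actual algorithm's fill step uses more sophisticated signal reconstruction.

```python
# Hypothetical sketch of skip-and-fill scheduling: every SKIP-th sample
# slot is 'stolen' for an error measurement, and the missing output is
# filled in from the neighboring converted samples.

SKIP = 8  # assumed skip period; the real choice trades calibration rate vs. accuracy

def skip_and_fill(samples):
    out = []
    for n, x in enumerate(samples):
        if n % SKIP == SKIP - 1 and 0 < n < len(samples) - 1:
            # This slot was used for a calibration measurement instead of a
            # conversion; reconstruct the output by linear interpolation.
            out.append((samples[n - 1] + samples[n + 1]) / 2)
        else:
            out.append(x)  # normal conversion passes straight through
    return out

print(skip_and_fill(list(range(16))))
```

Because only one slot in every `SKIP` is stolen, the error measurement proceeds continuously in the background while the converter keeps producing an output for every sample period.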
In 1998, Joseph Ingino at Stanford University proposed a continuously calibrated pipelined ADC. In this case, an extra pipeline stage is introduced. While the ADC performs normal conversions, this extra stage undergoes calibration. Once the calibration of the extra stage is done, the extra stage and the first stage of the pipelined ADC swap roles: the calibrated extra stage takes the place of the first stage, and the original first stage undergoes calibration in turn. In 2003, Boris Murmann at U.C. Berkeley proposed a pipelined ADC without any feedback loop for calibration. The entire calibration algorithm is carried out by a digital post-processing unit, which can run in the background. In this architecture, a pseudo-random signal causes the ADC to operate in one of two modes, both of which yield correct results. In the ideal case without any non-linearity error, the difference between the residue outputs of the two modes for the same input should be constant at all times. However, when non-linearity is present in the inter-stage circuit, that difference becomes input-dependent, and the digital post-processing uses this deviation to correct the error.
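The two-mode principle behind this pseudo-random approach can be sketched numerically. This is an illustrative model, not Murmann's actual implementation: the gain of 2, the ±0.25 mode offset, and the cubic error term `a3` are assumptions chosen only to show how a constant mode difference turns input-dependent once the amplifier is nonlinear.

```python
# Hypothetical sketch of the two-mode residue comparison: a pseudo-random
# bit shifts the stage residue up or down. With an ideal (linear)
# inter-stage amplifier, the difference between the two modes is constant
# for any input; amplifier curvature makes it input-dependent, which is
# what the digital post-processor detects and removes.

def residue(x, mode, a3=0.0):
    """Stage residue with an optional cubic amplifier nonlinearity a3.
    mode = +1 or -1 is the pseudo-random shift."""
    v = 2 * x + mode * 0.25  # ideal gain-of-2 residue plus the mode offset
    return v + a3 * v ** 3   # nonlinearity of the inter-stage amplifier

def mode_difference(x, a3):
    return residue(x, +1, a3) - residue(x, -1, a3)

# Linear amplifier: the difference is the same (0.5) for any input.
print(mode_difference(0.1, 0.0), mode_difference(0.3, 0.0))
# Nonlinear amplifier: the difference varies with the input, exposing the error.
print(mode_difference(0.1, 0.02), mode_difference(0.3, 0.02))
```

The background property follows from this: every normal conversion sample contributes to the mode-difference statistics, so no dedicated calibration cycles are needed.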
The background technique proposed by Boris Murmann, however, requires rather complex digital signal processing, such as multiplications performed on every converted sample.
Although calibration techniques have been proposed to correct the mismatch errors (e.g., see U.S. Pat. Nos. 5,499,027, 6,529,149, 6,563,445, 6,720,895), they require additional overhead and suffer from a variety of disadvantages: they are generally time-consuming, difficult to implement, and require additional structures.