1. Field of the Invention.
The present invention relates to analog-to-digital converters and, more particularly, to integrating analog-to-digital converters.
Analog-to-digital converters are used to provide sequences of digital code representations of numbers in any chosen number base, where each of such numbers corresponds to a point in an analog input signal waveform provided with respect to a reference value. In other words, the digital code representations of a sequence of numbers are related to the magnitudes of a corresponding sequence of points in such an analog signal and to the fixed magnitude of a reference level.
These analog-to-digital converters compare the magnitude taken by the analog input signal at a point in time to the reference level magnitude and attempt to provide an approximation of this relationship, as it occurs over the short sampling time interval, by a digital code representation. This process is repeated to give a sequence of digital code representations corresponding to sample points in the analog waveform. The conversion process may be expressed as the analog input signal magnitude being taken equal to the product of the reference level, the output "estimating number" that is represented by the digital code representation, and a "transfer function parameter" which is equal to one for linear converters. However, several sources of variance within the converter can render the conversion process nonlinear if the design of the converter is not carefully managed.
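The product relationship described above can be sketched numerically. The helper below is hypothetical and not part of the invention; it merely illustrates an "estimating number" chosen so that the analog input magnitude approximately equals the reference level times the transfer function parameter times that number, scaled to an assumed code width.

```python
def estimating_number(v_in, v_ref, bits=8, k=1.0):
    """Return the integer output code n such that
    v_in is approximately v_ref * k * (n / 2**bits),
    where k is the "transfer function parameter" (k = 1 for a
    linear converter). Hypothetical illustration only."""
    full_scale = 2 ** bits
    n = round(v_in / (v_ref * k) * full_scale)
    # Clamp to the representable code range, as a real converter would.
    return max(0, min(full_scale - 1, n))

# A mid-scale input maps to a mid-scale code:
code = estimating_number(0.5, v_ref=1.0, bits=8)   # 128
# A nonunity k (a nonlinear or miscalibrated converter) shifts the code:
skewed = estimating_number(0.5, v_ref=1.0, bits=8, k=2.0)   # 64
```

Note that a departure of k from one changes every output code, which is why the referenced design variances must be managed to keep the converter linear.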
Another source of difficulty in the conversion process is the presence of noise on the analog input signal to be converted to a sequence of digital code representations. Since the conversion process, as previously described, provides a digital code representation for each corresponding point in the analog input signal which depends on the value of that signal at the exact point in time when the sample is taken, noise in the analog signal will cause the output code sequence to differ from what it otherwise would have been. Although such noise could be removed to a considerable extent by subsequent digital processing, there can be substantial value in eliminating any effect of the noise before the conversion is complete. Typically, this is done by using an analog-to-digital conversion technique in which the digital code representation depends on the time integral, or average value, of the analog input signal over some time interval about each point where a conversion is desired. Such integration or averaging of the signal sample gives very repeatable results for the same analog waveform even in the presence of substantial amounts of noise accompanying that signal. The effects of noise will be averaged out for those noise frequency components in the analog input signal whose periods, that is, the reciprocals of those frequencies, are less than the time over which the analog input signal is integrated about a sampling point.
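The averaging effect described above can be sketched with a simple numeric integration. The signal, frequencies, and window length below are hypothetical examples, not taken from the invention: a DC level carrying sinusoidal noise whose period is shorter than the integration window is recovered almost exactly, whereas an instantaneous sample can land on a noise peak.

```python
import math

def integrated_sample(signal, t0, window, steps=1000):
    """Average value of signal over [t0, t0 + window] by a simple
    left-endpoint numeric integration (a sketch of the averaging
    performed by an integrating converter)."""
    dt = window / steps
    total = sum(signal(t0 + i * dt) for i in range(steps))
    return total * dt / window

# Hypothetical input: a 1 V level with 0.2 V of sinusoidal noise
# at 1 kHz (noise period 1 ms).
noise_freq = 1000.0
sig = lambda t: 1.0 + 0.2 * math.sin(2 * math.pi * noise_freq * t)

# Integrating over ten noise periods (10 ms, much longer than the
# 1 ms noise period) recovers essentially the 1 V mean.
avg = integrated_sample(sig, t0=0.0, window=10 / noise_freq)

# An instantaneous sample taken at a noise peak reads 1.2 V instead.
peak_sample = sig(0.25 / noise_freq)
```

This is the sense in which noise components whose periods are shorter than the integration time are averaged out of the conversion result.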
Such analog-to-digital converters can be most conveniently and cheaply provided in monolithic integrated circuit chips. However, there is a strong need to minimize the amount of area on the major surface of the monolithic integrated circuit chip which must be devoted to implementing such analog-to-digital converters. This is especially true where multiple analog-to-digital converters are provided in one chip to accommodate multiple analog input signals. Often, integration of each of the analog signals must be done simultaneously with the others during the sampling thereof to eliminate unwanted timing differences which could otherwise arise in the conversion of these signals to their respective digital code sequences. Thus, an analog-to-digital converter is desired which integrates an analog signal at its input for a period of time as part of the conversion process and which can be implemented while taking up relatively little space at the surface of the monolithic integrated circuit chip.