Digital-to-analog converters (DAC's) are widely used wherever analog signals must be produced from digital words. The number of bits of resolution in the digital word, the speed of conversion, the integral and differential linearity, and the accuracy of the conversion are the dominant parameters characterizing a DAC. Some applications require very high speed DAC's, some very high resolution DAC's, and many require highly linear, highly accurate DAC's. Indeed, in many uses high integral linearity and high accuracy are even more important than resolution: a DAC with only five- to eight-bit resolution may be adequate, provided its conversion is accurate to twelve or fourteen bits.
Such requirements for high accuracy place severe constraints on the matching of components within the DAC. A typical DAC consists of many current sources, each of which is switched (i.e., turned on or off) depending on the value of the digital input; the currents are summed to produce the analog output current. The accuracy of the DAC output is therefore fundamentally limited by the accuracy of these current sources. Typically, 2^n - 1 current sources, or fewer, are employed to provide the analog output. Lower resolution DAC's (e.g., up to about eight bits) can be made directly, employing techniques such as R-2R ladders, binarily weighted current sources, and so forth. Conventional manufacturing process technology, however, generally allows only about ten-bit accuracy in matching components such as current sources, and eight-bit accuracy is more typical of customary processes. A DAC accurate to twelve bits therefore requires more complex processing, because the necessary component matching cannot be achieved using current process technologies. Alternative approaches, such as laser trimming, "zener zapping", and the like, are used instead. Unfortunately, these alternative approaches generally bring higher cost and lower manufacturing yields.
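The link between current-source matching and output accuracy described above can be illustrated with a small numerical model. This is a hypothetical sketch, not part of the specification: the function name `make_dac`, the Gaussian mismatch model, and the 1% mismatch figure are all illustrative assumptions.

```python
import random

def make_dac(bits=8, mismatch=0.0, seed=0):
    """Model a binarily weighted DAC: one current source per bit, each with
    a fixed fractional mismatch error drawn once, at 'manufacture' time."""
    rng = random.Random(seed)
    # Nominal source weights are 1, 2, 4, ... LSBs; mismatch perturbs each.
    weights = [(2 ** k) * (1 + rng.gauss(0, mismatch)) for k in range(bits)]

    def convert(code):
        # Sum the currents of the sources switched on by the input code.
        return sum(w for k, w in enumerate(weights) if (code >> k) & 1)

    return convert

ideal = make_dac(bits=8, mismatch=0.0)
real = make_dac(bits=8, mismatch=0.01, seed=1)  # assumed 1% source mismatch

# Worst-case deviation from the ideal transfer curve, in LSBs: even small
# per-source mismatch produces errors that limit the usable accuracy.
worst_error = max(abs(real(c) - ideal(c)) for c in range(2 ** 8))
```

Running such a model with the mismatch parameter set to typical process tolerances shows why direct component matching suffices for roughly eight-bit accuracy but not for twelve.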