A digital-to-analog converter (“DAC”) converts a digital signal into an analog signal and is commonly used in radio frequency (RF) systems, where the resulting analog signal can be mixed with a carrier frequency for transmission over a medium. A DAC samples a digital signal at a specified rate and converts the sampled values of the digital signal into corresponding analog values.
The precision of a DAC is based on the number of bits per sample and the sampling rate, among other factors. For example, the number of bits per sample defines how many different values can be output by the DAC. In a simple case where a DAC can output only two values (i.e. a 1-bit DAC), the generated analog waveform would vary between a high and a low value (e.g. 0 and 5 volts). In contrast, a DAC capable of sampling 12 bits at a time (i.e. a 12-bit DAC) would be capable of outputting 4096 (2^12) different analog values. Because of the higher number of possible outputs, a higher-bit DAC can more precisely output a desired value.
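The relationship between bit depth and output precision can be sketched as follows. This is an illustrative model only, not part of any described embodiment; the function names and the 0-to-5-volt reference range are assumptions chosen to match the example above.

```python
# Illustrative sketch (hypothetical helper names, assumed 0-5 V range):
# an n-bit DAC can output 2**n discrete values, so a higher-bit DAC
# quantizes a desired voltage with a smaller step size.

def dac_levels(bits: int) -> int:
    """Number of discrete output values for an n-bit DAC."""
    return 2 ** bits

def quantize(voltage: float, bits: int, v_ref: float = 5.0) -> float:
    """Nearest analog value an n-bit DAC can output for a desired voltage."""
    levels = dac_levels(bits)
    step = v_ref / (levels - 1)          # spacing between adjacent outputs
    code = round(voltage / step)          # nearest digital code
    code = max(0, min(levels - 1, code))  # clamp to the valid code range
    return code * step

print(dac_levels(1))    # 2 values for a 1-bit DAC
print(dac_levels(12))   # 4096 values for a 12-bit DAC
```

A 1-bit DAC can only emit 0 V or 5 V, while the 12-bit DAC's 4096 levels place its nearest output within about 0.6 mV of any desired value in the range.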
Similarly, a DAC with a higher sampling rate is more precise than a DAC with a lower sampling rate. During the time period between samplings, the actual analog value is not specified but assumed. Specifically, the value of the analog signal between two samplings is defined not by a specified digital value but by the value of the analog output as it changes from one sampled value to the next. This assumption can be represented visually as the smoothing of a curve between two points representing defined values of the analog output.
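One simple model of this assumed behavior between samplings is a linear transition from one sampled value to the next. The sketch below is a hypothetical illustration (the function name and values are assumptions), showing how the analog value between two sample instants is interpolated rather than directly commanded by a digital code.

```python
# Illustrative sketch: between two sampled outputs, the analog value is not
# defined by any digital code; here it is modeled as a linear transition
# from the value v0 (sampled at time t0) to v1 (sampled at time t1).

def between_samples(v0: float, v1: float, t: float, t0: float, t1: float) -> float:
    """Estimate the analog value at time t, with t0 <= t <= t1."""
    frac = (t - t0) / (t1 - t0)   # fraction of the way to the next sample
    return v0 + frac * (v1 - v0)

# Midway between a 1.0 V sample and a 3.0 V sample:
print(between_samples(1.0, 3.0, 0.5, 0.0, 1.0))  # 2.0
```

A higher sampling rate shortens the interval t1 - t0, so less of the waveform is left to this assumption and the output tracks the desired signal more precisely.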
FIG. 1 illustrates an example of how a 4-bit DAC 100 converts a digital signal into an analog signal, V(t). As a 4-bit DAC, DAC 100 samples an input digital signal four bits at a time and can output 16 discrete values as identified by the lines on the y-axis of the graph. The sampling rate at which DAC 100 operates defines how frequently the digital signal is sampled.
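The conversion performed by a 4-bit DAC such as DAC 100 can be sketched as consuming a digital bitstream four bits per sample and emitting one of 16 discrete output values. This is an assumed, simplified model (the function name, MSB-first ordering, and 0-to-5-volt scaling are illustrative choices, not details from FIG. 1).

```python
# Illustrative sketch of a 4-bit DAC: each group of four bits selects one of
# 16 discrete analog output values, scaled here to an assumed 0-5 V range.

def four_bit_dac(bitstream: str, v_ref: float = 5.0) -> list[float]:
    """Convert a bitstream (MSB first) into a sequence of analog samples V(t)."""
    samples = []
    for i in range(0, len(bitstream), 4):
        code = int(bitstream[i:i + 4], 2)     # one of 16 codes, 0..15
        samples.append(code * v_ref / 15)     # map code to a discrete voltage
    return samples

# Four samples: codes 0, 15, 5, 0 -> 0 V, 5 V, ~1.67 V, 0 V
print(four_bit_dac("0000" "1111" "0101" "0000"))
```

Each emitted value sits on one of the 16 horizontal lines of the graph in FIG. 1; the sampling rate determines how often a new four-bit group is consumed.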
Generally, there is a tradeoff between the resolution and the sampling rate of a DAC. For example, a DAC designed to operate at higher resolution (i.e. a DAC that samples a greater number of bits at a time) generally requires a lower sampling frequency. Common examples of DACs include a 12-bit DAC that samples at a maximum frequency of 4 GHz and a 16-bit DAC that samples at a maximum frequency of 1 GHz. A 16-bit DAC at a sampling frequency of 4 GHz is not achievable in the current state of the art.
Unfortunately, many systems require digital-to-analog conversion at rates exceeding those obtainable using current DAC technology. For example, in high-capacity systems, wideband waveform generators, electronic attack systems, and other systems, it may not be possible to obtain the high-resolution, high-speed conversions necessary to implement a satisfactory system.
FIG. 2 illustrates how a DAC can be incapable of outputting an analog signal with a desired accuracy. The graph on the left side is the same as in FIG. 1 and represents the actual output of DAC 100. The graph on the right, however, represents a desired output for the system. As shown by the dashed line between points 103 and 104, the relatively smooth line (which would become smooth after the signal is passed through a low-pass filter) is unobtainable between points 103 and 104 because DAC 100 cannot sample quickly enough to output the desired waveform. A similar result will occur if the DAC has insufficient resolution to output each desired value.