The ability of personal computers to generate high-quality musical sounds has assumed increasing importance in recent years. For this purpose, some manufacturers have equipped their personal computers with a set of variable-frequency digital oscillators which repetitively sample one or more waveform buffers. Each oscillator reads out, at a fixed clock rate, every sample, every other sample, every third sample, etc., to produce a sound at the base frequency, its second harmonic, its third harmonic, etc., respectively. The amplitude of each oscillator's output can be varied by digital or analog means.
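The stride-based readout described above can be sketched in software. The following is an illustrative model only, not the manufacturers' hardware: a lookup table holds one cycle of a waveform, and reading it with stride k at a fixed clock rate yields the k-th harmonic of the base frequency. The names (`oscillator`, `sine_table`, `TABLE_SIZE`) are assumptions chosen for clarity.

```python
import math

# Hypothetical sketch of a wavetable oscillator; all names are illustrative.
TABLE_SIZE = 512
sine_table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def oscillator(table, stride, n_samples, amplitude=1.0):
    """Read every `stride`-th sample of `table` at a fixed clock rate.

    stride=1 produces the base frequency, stride=2 its second
    harmonic, stride=3 its third harmonic, and so on. The amplitude
    scaling models the digital/analog amplitude control.
    """
    phase = 0
    out = []
    for _ in range(n_samples):
        out.append(amplitude * table[phase])
        phase = (phase + stride) % len(table)  # wrap around the buffer
    return out
```

Because the phase index wraps modulo the table length, a stride of k completes k full cycles in the time one full table readout takes, which is exactly the harmonic relationship described above.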
By adding the outputs of a plurality (e.g. 32) of these oscillators, it is possible to produce a 32-term Fourier series which can adequately define even a fairly complex musical waveform over a time interval corresponding to one cycle of the base frequency. This method is known as additive synthesis.
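As a rough software analogue of this additive method, the sketch below sums 32 sine partials with per-harmonic amplitudes, forming a truncated Fourier series over one cycle of the base frequency. This is an assumed illustration, not the actual oscillator bank; the function name and the sawtooth example are chosen for demonstration.

```python
import math

# Illustrative additive synthesis: sum N harmonic partials over one
# cycle of the base frequency. amplitudes[k-1] scales the k-th harmonic.
def additive_synth(amplitudes, n_samples):
    out = [0.0] * n_samples
    for k, amp in enumerate(amplitudes, start=1):
        for i in range(n_samples):
            out[i] += amp * math.sin(2 * math.pi * k * i / n_samples)
    return out

# Example: 32 harmonics with 1/k amplitudes approximate one cycle
# of a sawtooth-like waveform, a classic additive-synthesis result.
saw = additive_synth([1.0 / k for k in range(1, 33)], 512)
```

With 32 terms, even fairly complex periodic waveforms can be approximated over one cycle, as the text notes; sharp discontinuities, however, show ripple (Gibbs overshoot) near the jump.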
Theoretically, the above-described system can also generate speech, particularly the voiced parts of speech, whose waveforms are structurally similar to music. In practice, however, speech generated by this method is flawed for two reasons: firstly, a straight Fourier expansion does not provide sufficient dynamic range for speech generation; and secondly, a Fourier expansion is not usable with unvoiced sounds, because unvoiced sounds have no fundamental frequency.