Musical sounds, such as those produced by acoustic instruments, are generally quasi-periodic. When a musical sound is analyzed by traditional means such as Fourier analysis, the spectrum associated with the sound is typically observed to vary with time. To faithfully reproduce musical sound as heard by the ear, a synthesis method must therefore address the problem of producing time variant waveforms.
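The time variant spectrum of a quasi-periodic tone can be illustrated with a short sketch. All parameter values below (sample rate, fundamental, decay rates, frame size) are assumed for illustration only and are not taken from this specification:

```python
import numpy as np

# A quasi-periodic tone whose harmonics decay at different rates, so a
# short-time Fourier analysis sees a spectrum that changes over time.
sr = 8000                            # sample rate in Hz (assumed)
t = np.arange(2 * sr) / sr           # two seconds of samples
f0 = 220.0                           # fundamental frequency in Hz (assumed)

# Three harmonics with different decay rates yield a time variant spectrum.
signal = (1.00 * np.exp(-1.0 * t) * np.sin(2 * np.pi * 1 * f0 * t)
          + 0.50 * np.exp(-3.0 * t) * np.sin(2 * np.pi * 2 * f0 * t)
          + 0.25 * np.exp(-6.0 * t) * np.sin(2 * np.pi * 3 * f0 * t))

def frame_spectrum(x, start, n=1024):
    """Magnitude spectrum of one analysis frame."""
    return np.abs(np.fft.rfft(x[start:start + n]))

early = frame_spectrum(signal, 0)    # frame near the attack
late = frame_spectrum(signal, sr)    # frame one second later

bin1 = round(1 * f0 * 1024 / sr)     # FFT bin nearest the fundamental
bin2 = round(2 * f0 * 1024 / sr)     # FFT bin nearest the 2nd harmonic

# The 2nd harmonic is relatively strong in the early frame and weak in
# the late frame: the spectrum is time variant.
print(early[bin2] / early[bin1] > late[bin2] / late[bin1])
```

Because the three partials decay at different rates, the two analysis frames report different relative harmonic strengths, which is the time variant behavior a synthesis method must reproduce.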
One existing method for synthesizing time variant waveforms is known as subtractive synthesis. Subtractive synthesis filters a steady state signal via a digital or analog filter whose frequency response can be changed in real time. Another time variant synthesis method, commonly referred to as FM synthesis, frequency modulates a signal and sums that signal with a steady state signal.
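Subtractive synthesis can be sketched as follows. This is a minimal illustration, not an implementation from any particular instrument; the one-pole filter, sweep range, and source waveform are all assumed:

```python
import numpy as np

# Subtractive synthesis sketch: a harmonically rich steady-state source
# (a sawtooth) is passed through a one-pole low-pass filter whose
# coefficient is swept in real time, so the output spectrum varies with
# time even though the source does not.
sr = 8000                                  # sample rate in Hz (assumed)
t = np.arange(sr) / sr                     # one second of samples
saw = 2.0 * ((110.0 * t) % 1.0) - 1.0      # steady-state 110 Hz sawtooth

# Sweep the smoothing coefficient from dark (small) to bright (large).
alpha = np.linspace(0.02, 0.9, len(saw))

y = np.zeros_like(saw)
state = 0.0
for n in range(len(saw)):
    state += alpha[n] * (saw[n] - state)   # y[n] = y[n-1] + a*(x[n] - y[n-1])
    y[n] = state

def hf_energy(frame):
    """Energy above roughly 1 kHz in one windowed analysis frame."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return float(np.sum(mag[len(mag) // 4:] ** 2))

# The filter opens over the course of the note: the late frame contains
# far more high-frequency energy than the early frame.
print(hf_energy(y[:2048]) < hf_energy(y[-2048:]))
```

The time variant character here comes entirely from the filter sweep; the source itself is steady state, which is the defining property of the subtractive approach.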
FM synthesis takes advantage of the time variant nature of the spectrum produced by frequency modulation. A third commonly employed method, harmonic interpolation synthesis, produces a time variant waveform by computing a spectrum from an existing spectrum via an interpolation algorithm and a time dependent variable.
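The time variant spectrum produced by frequency modulation can be sketched in a few lines. The carrier and modulator frequencies and the index envelope below are assumed values chosen for illustration:

```python
import numpy as np

# FM synthesis sketch: a carrier's instantaneous phase is modulated by a
# second oscillator, and the modulation index is ramped over time.  As
# the index grows, sidebands at fc +/- k*fm appear, so the spectrum
# widens as the note evolves.
sr = 8000                                   # sample rate in Hz (assumed)
t = np.arange(sr) / sr                      # one second of samples
fc, fm = 440.0, 110.0                       # carrier and modulator (assumed)
index = np.linspace(0.0, 5.0, len(t))       # modulation index rising in time

y = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

win = np.hanning(1024)
early = np.abs(np.fft.rfft(y[:1024] * win))    # frame with a small index
late = np.abs(np.fft.rfft(y[-1024:] * win))    # frame with a large index

def spread(mag):
    """Count of spectral bins holding significant energy."""
    return int(np.sum(mag > 0.1 * mag.max()))

# The early spectrum is concentrated near fc; the late spectrum spreads
# across many sideband frequencies.
print(spread(early) < spread(late))
```

A single index envelope thus controls the evolution of many spectral components at once, which is the economy FM synthesis exploits.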
Music synthesizers employing these prior art methods cannot, however, accurately reproduce the entire range of acoustic musical instrument sounds. Further, they offer only very limited or no means for real time modification of the time variant parameters which control the time variant nature of a sound. In the case of subtractive synthesis and FM synthesis, a sound is synthesized by trial and error until it is judged by the listener to be a reasonable facsimile of the desired acoustic instrument timbre. Harmonic interpolation synthesis implements a Fourier analysis approach which is cumbersome. Since a full audio bandwidth signal may have up to 1000 time varying spectral components, it would be extremely difficult to develop any meaningful time dependent interpolation algorithm for every "harmonic" in a musical spectrum.
Another technique used for acoustic musical reproduction is referred to as sampling. This technique simply digitizes an analog signal and stores it in memory. To accurately reproduce the entire range of a musical instrument of interest, it is necessary to store a digital representation of every note in the musical instrument's range. Since this practice requires an excessive amount of memory, most sampling instruments store a reduced number of waveform representations. Playing a note which has a different frequency or duration than the original sampled waveform stored in a sampling instrument's memory produces a distorted version of the original signal. The distortion or error increases as a function of the difference in pitch between the played note and the originally sampled note. Distortion also occurs when a sampled note's recorded duration differs from its playback duration. Typically, sampling instruments extend a sample's duration during playback by "looping" through a portion of the original sample for an extended period of time. This action changes the time variant nature of the originally sampled instrument, hence distorting the original signal.
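The looping behavior described above can be sketched as follows. The sample, loop points, and decay rate are assumed values for illustration, not those of any actual sampling instrument:

```python
import numpy as np

# Sample-looping sketch: to extend playback beyond the recorded length,
# a sustain segment of the stored sample is repeated.  The loop freezes
# the sound's natural evolution, which is the distortion described above.
sr = 8000                                  # sample rate in Hz (assumed)
t = np.arange(sr) / sr                     # one recorded second
recorded = np.exp(-2.0 * t) * np.sin(2 * np.pi * 220.0 * t)  # decaying tone

loop_start, loop_end = 4000, 6000          # assumed loop points (samples)

def play(sample, n_out, loop_start, loop_end):
    """Read n_out samples, cycling through [loop_start, loop_end)."""
    out = np.empty(n_out)
    pos = 0
    for n in range(n_out):
        out[n] = sample[pos]
        pos += 1
        if pos == loop_end:                # jump back to sustain the note
            pos = loop_start
    return out

extended = play(recorded, 2 * sr, loop_start, loop_end)

# The natural decay stops: the looped tail repeats at a constant level
# instead of continuing to fade as the original instrument would.
print(np.allclose(extended[-2000:], extended[-4000:-2000]))
```

The looped output is exactly periodic after the first pass through the loop, whereas the original instrument's amplitude and spectrum would have kept evolving; the difference between the two is the distortion introduced by this duration-extension technique.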
It would be desirable to provide a process for analyzing the time variant nature of quasi-periodic signals, and an apparatus for synthesizing signals in real time from parameters derived from that process.