With the age of electronics has come the age of electronic music. While purists may shriek in horror at some of the recent developments, they should remember that innovation and change are healthy, if not always comfortable. After all, today's avant-garde music is tomorrow's elevator music.
One recent development is the synthesizer, and the prior art is replete with examples. In broad terms, the synthesizer allows the musician to define and refine the characteristics of a note in the time domain (attack and decay) and in the frequency domain (timbre). Since these are the very characteristics that differentiate the sound of a guitar from that of a piano, the synthesizer can be made to produce the sounds of known instruments as well as other sounds. Most synthesizers utilize a standard piano keyboard as the input device. Thus, the synthesizer has provided the musician, able to play only the piano, with all the instruments of the orchestra at his or her fingertips.
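The time-domain shaping described above can be illustrated with a generic attack-decay-sustain-release envelope. This is only a minimal sketch of the general technique, not the implementation of any particular synthesizer; all parameter names and values here are invented for illustration.

```python
def envelope(t, attack=0.01, decay=0.1, sustain=0.7, release=0.2, note_len=0.5):
    """Amplitude (0..1) at time t seconds for a note held note_len seconds.

    Hypothetical ADSR-style envelope: the multiplier applied to the raw
    waveform so that the note swells, settles, holds, and fades.
    """
    if t < 0:
        return 0.0
    if t < attack:                      # ramp up from silence to full level
        return t / attack
    if t < attack + decay:              # fall from full level to sustain level
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < note_len:                    # hold while the key remains down
        return sustain
    if t < note_len + release:          # fade out after the key is released
        return sustain * (1.0 - (t - note_len) / release)
    return 0.0                          # silence after the release phase
```

Changing these few parameters is enough to move a note's character from percussive (fast attack, no sustain, as in a piano) toward sustained (slow attack, high sustain, as in a bowed string).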
Of course, not everyone plays a piano. For those who do not, but do play wind instruments, synthesizers based on wind instruments have been developed. Some versions have a transducer built into a more-or-less standard instrument; these sense the actual acoustic vibrations and construct sounds based on them. In other wind instrument devices, the portions of the instrument that actually produce the vibrating column of air are dispensed with, but the keys or buttons remain. These devices have a mouthpiece of sorts, and sense parameters such as air velocity. These parameters, when combined with information on key or button depressions, allow the note to be determined. Thus the wind musician too can have all the instruments of the orchestra at his or her fingertips.
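The note-determination step described above can be sketched as a simple decision: the fingering selects the pitch, while the sensed breath gates the note and sets its loudness. The fingering table, threshold, and function names below are assumptions invented for illustration, not the scheme of any particular prior-art device.

```python
# Simplified fingering chart: set of pressed keys -> MIDI note number.
FINGERINGS = {
    frozenset(): 72,                      # all keys open: C5
    frozenset({"k1"}): 71,                # B4
    frozenset({"k1", "k2"}): 69,          # A4
    frozenset({"k1", "k2", "k3"}): 67,    # G4
}

BREATH_THRESHOLD = 0.05  # below this, the player is not really blowing

def decide_note(pressed_keys, breath):
    """Return (midi_note, velocity), or None when no note should sound."""
    if breath < BREATH_THRESHOLD:
        return None  # no vibrating air column: no note, whatever the fingering
    note = FINGERINGS.get(frozenset(pressed_keys))
    if note is None:
        return None  # unrecognized fingering
    velocity = min(127, int(breath * 127))  # harder blow -> louder note
    return note, velocity
```

For example, `decide_note({"k1", "k2"}, 0.6)` yields the A4 fingering at a moderate velocity, while the same fingering with negligible breath yields no note at all.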
Of course, not everyone plays a piano or a wind instrument. For those who do not, but do sing, synthesizers that use a voice input have been developed (or at least proposed). U.S. Pat. No. 4,463,650 to Rupert discloses such a device. The voice input is sensed, and the fundamental frequency is determined by a zero-crossing analysis. Depending on the type of instrument to be simulated, the appropriate waveform from a digital memory is read out at a clock rate determined by the voice frequency. However, the human voice has a much more complex waveform than does a vibrating reed, and for all but a trained female voice, it is no small technical feat to extract the fundamental frequency correctly and reliably.
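The zero-crossing analysis mentioned above can be sketched in a few lines: a clean periodic waveform crosses zero twice per cycle, so counting sign changes over a known duration gives an estimate of the fundamental. This is a generic illustration of the technique, not the circuit disclosed in the Rupert patent; the function names are invented here.

```python
import math

def estimate_fundamental(samples, sample_rate):
    """Estimate fundamental frequency (Hz) by counting zero crossings.

    A periodic wave crosses zero twice per cycle, so
    f0 ~= crossings / (2 * duration).
    """
    crossings = 0
    for prev, cur in zip(samples, samples[1:]):
        if (prev < 0) != (cur < 0):  # sign change between adjacent samples
            crossings += 1
    duration = (len(samples) - 1) / sample_rate
    return crossings / (2.0 * duration)

# One second of a pure 440 Hz sine at a 44.1 kHz sample rate (a small phase
# offset keeps samples from landing exactly on zero).
sr = 44100
wave = [math.sin(2 * math.pi * 440 * n / sr + 0.1) for n in range(sr)]
f0 = estimate_fundamental(wave, sr)  # close to 440
```

The weakness the text identifies is visible even in this sketch: a pure sine yields a clean count, but a complex voice waveform crosses zero many extra times per cycle, so a naive count badly overestimates the fundamental.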
Of course, not everyone plays a piano or a wind instrument or has a trained female voice.