1. Field of the Invention
This invention relates to improving sound synthesis models. In particular, a sound synthesis model is analyzed for accuracy and improved by correcting for discovered errors.
2. Description of Related Art
Many sound synthesis methods attempt to emulate the sounds created by musical instruments, such as drums, pianos, or horns. For example, digital sound synthesis methods attempt to mimic a sound by creating a signal consisting of a series of digital values that represent the amplitude of a sound wave. The most accurate digital method of emulation is sample synthesis, which synthesizes sound by playing a recording of the desired sound. Sample synthesis is commonly used in drum machines, where only a few distinct sounds are synthesized.
In some applications, sample synthesis requires too much memory to be practical. For example, in a piano emulation, a digital recording of the lowest note may last up to 30 seconds. This is more than 2 megabytes of 16-bit values if the recording is sampled at a rate of 44.1 kHz. Multiplying this by the 88 keys on a standard piano brings the storage to over 200 megabytes. Pianos also have different timbres depending on how hard a key is struck; the standard Musical Instrument Digital Interface (MIDI) defines 128 distinct velocity values, so the storage grows to roughly 30 gigabytes.
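The storage figures above follow from simple arithmetic, which can be checked with a short illustrative calculation (16-bit mono samples assumed, as in the discussion):

```python
# Back-of-the-envelope storage estimate for full sample synthesis of a piano.
SAMPLE_RATE_HZ = 44_100
BYTES_PER_SAMPLE = 2          # 16-bit values
NOTE_SECONDS = 30             # longest note recording
KEYS = 88                     # keys on a standard piano
VELOCITIES = 128              # distinct MIDI velocity values

one_note = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * NOTE_SECONDS
all_keys = one_note * KEYS
all_velocities = all_keys * VELOCITIES

print(f"one note:       {one_note / 1e6:.1f} MB")        # ~2.6 MB
print(f"88 keys:        {all_keys / 1e6:.0f} MB")        # ~233 MB
print(f"128 velocities: {all_velocities / 1e9:.1f} GB")  # ~29.8 GB
```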
Even if all these sounds were recorded, the result would still not be an instrument that sounds like a real piano. The effects of the damper pedal and of inter-string coupling would be missing. The damper pedal couples all the strings together through the sound board. Further, when a chord is held down, with or without the damper pedal, the struck strings couple together.
A better sample synthesis might record combinations of keys being hit together. Taking all possible combinations of 2 out of 88 keys, 3 out of 88 keys, and so on up to 88 out of 88 keys yields an astronomical number of combinations, and still does not take into account the effect of time offsets. Sample synthesis therefore cannot practically yield a perfect piano sound. The standard solution to this problem is to sample only some of the notes and then use models to interpolate the notes and combinations of notes not sampled.
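The count of multi-key combinations mentioned above can be made concrete: choosing k of 88 keys for every k from 2 to 88 gives all subsets of the keys except the empty set and the 88 single-key subsets, i.e. 2^88 - 89 combinations, before time offsets are even considered:

```python
# Count every combination of 2, 3, ..., 88 simultaneously struck keys.
from math import comb

total = sum(comb(88, k) for k in range(2, 89))
assert total == 2**88 - 89   # all subsets minus empty and single-key subsets
print(f"{total:.3e} combinations")  # ~3.1e26
```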
There are many sound synthesis methods besides sample synthesis. Currently, the most prevalent synthesis method is "wave table" synthesis. Wave table synthesis uses two circular sound tables. One table represents the sound during the attack, and the other table represents the steady state. Two ADSR (Attack, Decay, Sustain, and Release) curves control the envelope for each table. For instruments that do not have a steady state, a third ADSR curve is typically used to control filter parameters. Often the attack table is replaced by a sampled attack; this is the approach used by most "sampled" libraries.
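As an illustration of the scheme described above, the following sketch reads a single circular steady-state table and scales it by a piecewise-linear ADSR envelope; the table contents, parameter names, and default values are hypothetical and chosen only for clarity, not taken from any particular synthesizer:

```python
# Minimal wave-table voice: circular table lookup scaled by an ADSR envelope.
import math

TABLE_SIZE = 1024
SAMPLE_RATE = 44_100
# A single-cycle sine serves as a stand-in steady-state table.
table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def adsr(t, a=0.01, d=0.1, s=0.7, note_off=0.5, r=0.3):
    """Piecewise-linear ADSR envelope value at time t (seconds)."""
    if t < a:                       # attack: ramp 0 -> 1
        return t / a
    if t < a + d:                   # decay: ramp 1 -> sustain level
        return 1.0 - (1.0 - s) * (t - a) / d
    if t < note_off:                # sustain: hold level
        return s
    if t < note_off + r:            # release: ramp sustain -> 0
        return s * (1.0 - (t - note_off) / r)
    return 0.0

def render(freq, seconds):
    """Read the table circularly at the given pitch, scaled by the envelope."""
    out = []
    phase = 0.0
    step = freq * TABLE_SIZE / SAMPLE_RATE  # table increment per output sample
    for n in range(int(seconds * SAMPLE_RATE)):
        out.append(table[int(phase) % TABLE_SIZE] * adsr(n / SAMPLE_RATE))
        phase += step
    return out

samples = render(440.0, 1.0)  # one second of an enveloped 440 Hz tone
```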
Wave guide synthesis is a music synthesis method that mimics a musical instrument using models based on the physical structure of the instrument. The theory of lossless wave guides simplifies the calculations needed to model many musical instruments. A specific case of wave guide synthesis is the plucked string algorithm, which may be used to emulate the sound of a plucked string. The plucked string algorithm involves filling a section of memory, called a delay line, with initial data.
FIG. 1 shows a block diagram of a prior art sound emulation model 100 which uses the plucked string algorithm to produce a digital signal Y'. The emulation model 100 employs a delay line 101 and a feedback gain 102. To produce the digital signal Y', data is read sequentially from the delay line 101 and scaled by the feedback gain 102 to account for sound evolution. Alternatively, data from the delay line 101 may be filtered or otherwise processed. The output signal Y' is fed back into the delay line 101, typically by overwriting memory. Once the last of the data in the delay line 101 has been read, reading begins again from the beginning, and continues in circular fashion for the duration of the sound signal Y'. The sound signal Y' evolves because scaling changes the values of the data in the delay line 101, and it repeats at a frequency that depends on the number of data points in the delay line 101 and the rate at which the data is sampled.
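The structure of FIG. 1 can be sketched in a few lines of illustrative code. The buffer is read circularly, each value is scaled by the feedback gain, and the output is written back over the delay line; the averaging of adjacent samples, common in plucked string (Karplus-Strong style) implementations, is one example of the "filtered or otherwise processed" step. Parameter names and values here are illustrative only:

```python
# Minimal plucked-string sketch: circular delay line with scaled feedback.
import random

random.seed(0)  # reproducible "pluck" for this illustration

def plucked_string(delay_length, num_samples, gain=0.996):
    # Fill the delay line with initial data; random noise models the pluck.
    delay = [random.uniform(-1.0, 1.0) for _ in range(delay_length)]
    out = []
    idx = 0
    for _ in range(num_samples):
        nxt = (idx + 1) % delay_length
        # Scale by the feedback gain; averaging adjacent samples acts as a
        # simple low-pass filter, so high frequencies decay faster.
        y = gain * 0.5 * (delay[idx] + delay[nxt])
        out.append(y)
        delay[idx] = y          # feed the output back into the delay line
        idx = nxt
    return out

# Pitch is roughly sample_rate / delay_length, e.g. 44100 / 100 = 441 Hz.
sound = plucked_string(delay_length=100, num_samples=44_100)
```

Because the feedback gain is less than one and the averaging removes energy, the tone decays over time, which is the "sound evolution" the feedback gain 102 models.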
Generally, a synthesis model based on prior art methods will not perfectly reproduce the sound made by a musical instrument, and methods are needed which improve the accuracy of a model without requiring excessive memory.