Wavetable based sound synthesis is a popular sound synthesis technique for use in mobile telecommunication terminals. It has the advantage that a very high sound synthesis quality is achieved with a rather simple algorithm, which basically relies on processing and playing back previously recorded audio samples, called wavetables.
For the purpose of music synthesis, the wavetables store the tones of real instruments recorded under different conditions, for instance using different pitches or musical notes, different note velocities, etc. Before the wavetables are actually included in the output audio sound, the raw wavetable data undergoes several signal processing operations, including decimation and interpolation for the purpose of pitch-shifting the original note, amplitude modulation for the purpose of modeling the envelope of the output audio waveform, filtering, etc.
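The amplitude modulation step mentioned above can be illustrated with a minimal sketch. The function name, the Q15 fixed-point format, and the linear attack ramp are illustrative assumptions, not details taken from the source:

```c
/* Illustrative sketch: amplitude modulation of raw wavetable samples
 * with a simple linear attack envelope. The Q15 gain format and the
 * envelope shape are assumptions for the example. */
#include <stddef.h>

/* Scale each 16-bit sample by a Q15 gain that ramps linearly from
 * 0 to 32767 over the first attack_len samples, then stays at 1.0. */
void apply_attack_envelope(short *samples, size_t n, size_t attack_len)
{
    for (size_t i = 0; i < n; ++i) {
        long gain;
        if (attack_len > 1 && i < attack_len)
            gain = (long)i * 32767 / (long)(attack_len - 1);
        else
            gain = 32767;   /* unity gain in Q15 */
        samples[i] = (short)(((long)samples[i] * gain) >> 15);
    }
}
```

In a full synthesizer the envelope would typically also comprise decay, sustain and release segments; the attack ramp alone suffices to show the sample-by-sample modulation.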
A signal processing operation that is extensively used in wavetable based sound synthesis is synchronous pitch-shifting. This operation is performed in order to modify the pitch of the recorded wavetable, which makes it possible to synthesize higher or lower musical notes or tones. Basically, the operation is carried out by resampling the wavetable data by decimation and/or interpolation procedures, such that the pitch is increased or decreased without changing the output sampling rate. For instance, playing only every second sample from the wavetable data would cause a pitch increase by one octave and a reduction of the number of samples by half. In general, any pitch-shifting operation will alter the number of samples in the signal.
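The octave-up example given above can be sketched directly in code. This is an illustration of decimation by two, not the patent's actual algorithm; arbitrary pitch ratios would require fractional resampling with interpolation:

```c
/* Illustrative sketch: pitch-shifting a wavetable up by one octave
 * by keeping only every second sample (decimation by 2). The output
 * sample count is halved, as noted in the text. */
#include <stddef.h>

/* Writes every second input sample to out; returns the number of
 * output samples produced. out must hold at least (n + 1) / 2. */
size_t decimate_by_two(const short *in, size_t n, short *out)
{
    size_t m = 0;
    for (size_t i = 0; i < n; i += 2)
        out[m++] = in[i];
    return m;
}
```

Played back at the unchanged output sampling rate, the decimated data traverses the recorded waveform twice as fast, which is what raises the pitch by one octave.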
Modern mobile telecommunication terminals provide specific architectural features that should be exploited by any practical implementation of a wavetable sound synthesis. Often a terminal contains more than one processor. For instance, the terminal may comprise a microcontroller unit (MCU) as main processor, as well as additionally one or more dedicated coprocessors. An example of such a dedicated coprocessor is a digital signal processor (DSP), which is the preferred tool for performing computationally intensive operations characteristic of signal processing tasks.
In several hardware architectures, the size of the memory space addressable by different processors differs, and it might not always be possible to store the entire wavetable data in the memory space addressable by the very processor that is going to process it, for example the DSP. In such architectures, it might be necessary to store the wavetable data in a memory space addressable by some other processor, for example the MCU. The MCU then has to transfer the wavetable data to the DSP during playback. A technological solution for inter-processor communication consists in using a memory space addressable by both processors, called shared memory. Provided that access conflicts are avoided, each processor can be allowed to access the shared memory at certain moments, in order to write data for the other processor, or to read data that was previously written for it by the other processor.
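A minimal sketch of such a shared-memory handoff is given below. The block structure, the single ready flag, and the function names are assumptions made for illustration; on real hardware the flag accesses would additionally need the platform's synchronization or mailbox primitives to avoid the conflicts mentioned above:

```c
/* Illustrative sketch (not from the source): the MCU writes one
 * block of wavetable samples into shared memory and raises a ready
 * flag; the DSP consumes the block and clears the flag. */
#include <stddef.h>

#define BLOCK_LEN 64              /* assumed burst size in samples */

struct shared_block {
    volatile int ready;           /* 1: written by MCU, readable by DSP */
    short samples[BLOCK_LEN];     /* one burst of wavetable data */
};

/* MCU side: returns 0 without writing if the DSP has not yet
 * consumed the previous block (simple conflict avoidance). */
int mcu_write_block(struct shared_block *sb, const short *src)
{
    if (sb->ready)
        return 0;
    for (int i = 0; i < BLOCK_LEN; ++i)
        sb->samples[i] = src[i];
    sb->ready = 1;
    return 1;
}

/* DSP side: returns 0 if no block is pending. */
int dsp_read_block(struct shared_block *sb, short *dst)
{
    if (!sb->ready)
        return 0;
    for (int i = 0; i < BLOCK_LEN; ++i)
        dst[i] = sb->samples[i];
    sb->ready = 0;
    return 1;
}
```

The single-flag scheme gives each processor exclusive access to the block in turn; a double-buffered or ring-buffer variant would allow the MCU to write the next block while the DSP is still reading the current one.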
A known approach to implementing a wavetable based sound synthesis on such a multiprocessor architecture is to copy the entire wavetable data which is needed by all active voices from an MCU memory into a DSP memory for processing and playback.
Due to DSP memory limitations, such an approach can only be used for small polyphony synthesis, since the larger the number of active voices, the higher the memory requirements for the DSP. In addition, very long wavetable data might also be difficult to utilize, since the size of a very long single wavetable might approach or even exceed the available DSP memory space. In summary, the available DSP memory space might be insufficient to accommodate the entire wavetable data needed to produce a desired audio output.
The same problem may arise in any other wavetable based sound synthesis system having a first processor with sufficient memory space for storing the wavetable data and a second processor with sufficient computational power for processing the wavetable data.
In U.S. Pat. No. 6,100,461 A, a wavetable based sound synthesis system is described, in which wavetable data having a modified wavetable structure is transmitted in bursts from a memory to a wavetable audio synthesis device. In order to achieve an efficient transmission on a Peripheral Component Interconnect (PCI) bus, the voice data samples, which are 8 or 16 bits in length, are organized in units of 32 bits called frames. The group of samples transmitted in one burst comprises several such frames of data for a voice. It is assumed that the wavetable audio synthesis device has access to the main memory space where the entire wavetable data is stored, and that the data transfers between the main memory and the device can be carried out without involving the main processor of the host machine. As mentioned above, however, such a wavetable audio synthesis device is not always able to keep the entire wavetable data in the memory space that it is able to address, and hence an alternative solution is needed.
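The grouping of samples into 32-bit frames can be sketched for the 16-bit case as follows. The packing order (first sample in the low half-word) and the function name are assumptions for illustration, beyond what the cited patent states here:

```c
/* Illustrative sketch: packing 16-bit voice samples into 32-bit
 * frames for a burst transfer, two samples per frame. The choice
 * of placing the earlier sample in the low half-word is assumed. */
#include <stdint.h>
#include <stddef.h>

/* Packs pairs of 16-bit samples into 32-bit frames; returns the
 * number of frames written. A trailing odd sample is ignored. */
size_t pack_frames(const int16_t *samples, size_t n, uint32_t *frames)
{
    size_t f = 0;
    for (size_t i = 0; i + 1 < n; i += 2)
        frames[f++] = (uint32_t)(uint16_t)samples[i] |
                      ((uint32_t)(uint16_t)samples[i + 1] << 16);
    return f;
}
```

For 8-bit samples, four samples per 32-bit frame would be packed analogously, so that each PCI data phase carries a full bus word.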