The terminology used here in regard to MIDI technology and to audio systems in general is as follows:
voice: a note played by a sound module, including not only the synthesized voice provided by a sound generator but also the voice produced by a digital audio effect. In other words, the term “voice” as used here covers both a synthesized voice and an audio effect voice.
synthesizer application: a player and a sound module.
synthesizer: the building block/component of a synthesizer application that generates actual sound, i.e. a sound module, i.e. a musical instrument that produces sound by the use of electronic circuitry.
sequencer application: a sequencer and associated equipment.
sequencer: the building block/component of a sequencer application that plays or records information about sound, i.e. information used to produce sound; in MIDI, it is a device that plays or records MIDI events.
player: equipment that includes a sequencer.
sound generator: an oscillator, i.e. an algorithm or a circuit of a synthesizer that creates sound corresponding to a particular note, the sound (being actual sound) therefore having a particular timbre.
sound module: a synthesizer; contains sound generators and audio processing means for the generation of digital audio effects.
digital audio effect: audio signal processing effect used for changing the sound characteristics, i.e. mainly the timbre of the sound.
note: a musical event/instruction that is used to represent a musical score and to control sound generation and digital audio effects. In other words, the term “note” as used here includes a musical score event, events for controlling the sound generator, and events for controlling digital audio effects.
A standard MIDI (musical instrument digital interface) file (SMF) describes a musical composition (or, more generally, a succession of sounds) as a MIDI data sequence, i.e. it is in essence a data sequence providing a musical score. It is input to either a synthesizer application (in which case music corresponding to the MIDI file is produced in real time, i.e. the synthesizer application produces playback according to the MIDI file) or a sequencer application (in which case the data sequence can be captured, stored, edited, combined and replayed).
A MIDI player provides the data stream corresponding to a MIDI file to a sound module containing one or more sound generators. A MIDI file provides instructions for producing sound on different channels, and each channel is mapped or assigned to one instrument. The sound module can produce the sound of a single voice, i.e. a sound having a single timbre (e.g. that of a particular kind of conventional instrument such as a violin or a trumpet, or of a wholly imaginary instrument), or can produce the sound of several different voices or timbres at the same time (e.g. corresponding to the sound made by two different people singing the same notes at the same time, or a violin and a trumpet playing the same notes at the same time, or an electronic piano instrument that is commonly implemented using two layered voices that are slightly de-tuned, producing a desired aesthetic tone modulation effect).
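The channel-to-instrument assignment and the layered-voice case just described can be sketched as follows. This is an illustrative model only, not any actual MIDI API; the instrument names, layer names, and detune values are hypothetical.

```python
# Hypothetical sketch: each MIDI channel is assigned one instrument
# (program), and one note on a channel may be rendered by more than one
# voice, e.g. a layered, slightly de-tuned electronic-piano patch.

# channel -> program (instrument) assignment, as set by Program Change events
channel_program = {0: "violin", 1: "trumpet", 2: "electric piano"}

# program -> list of (voice layer name, detune in cents) used per note
program_layers = {
    "violin": [("violin", 0.0)],
    "trumpet": [("trumpet", 0.0)],
    # a layered patch: two voices, slightly de-tuned against each other
    "electric piano": [("ep layer A", -3.0), ("ep layer B", +3.0)],
}

def voices_for_note(channel: int) -> list[tuple[str, float]]:
    """Return the voices (with detune) the sound module would allocate
    for one note played on the given channel."""
    return program_layers[channel_program[channel]]

print(voices_for_note(2))  # two layered voices for the electric piano
```

Note that the number of voices consumed per note depends on the patch, a point that becomes important in the discussion of standard SP-MIDI below.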
The terminology in connection with MIDI technology is such that a “note” is, or corresponds to, a “sound,” which may be produced by one or more “voices,” each having a unique (and different) “timbre” (which is, e.g., what sets apart the different sounds of middle C played on different instruments, appropriately transposed). Notes in a MIDI file are indicated by predetermined numbers; for other than percussion instruments, different note numbers correspond to different musical notes, whereas for percussion, different note numbers correspond to different percussion instruments (bass drum vs. cymbal vs. bongo, and so on), each of which plays its one and only sound. A MIDI file can specify that at a particular point in time, instead of just one note (monophonic) of one particular timbre (monotimbral) being played (i.e. one particular voice, such as the “voice” of a violin), several different notes (polyphonic) are to be played, each possibly using a different timbre (multitimbral).
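The dual interpretation of note numbers can be illustrated as follows; the sketch assumes the common General MIDI conventions (percussion on channel 10, note 60 as middle C) and uses a small subset of the General MIDI drum map.

```python
# Illustrative mapping of MIDI note numbers: on melodic channels a note
# number selects a pitch, while on the General MIDI percussion channel
# (channel 10, i.e. index 9 when counting from 0) the same numbers
# select percussion instruments instead.

PERCUSSION_CHANNEL = 9  # channel 10 in 1-based MIDI terminology

# subset of the General MIDI drum map
GM_DRUMS = {35: "acoustic bass drum", 38: "acoustic snare", 49: "crash cymbal 1"}
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def describe_note(channel: int, note_number: int) -> str:
    if channel == PERCUSSION_CHANNEL:
        return GM_DRUMS.get(note_number, "unknown percussion")
    octave = note_number // 12 - 1  # convention placing note 60 (middle C) in octave 4
    return f"{NOTE_NAMES[note_number % 12]}{octave}"

print(describe_note(0, 60))  # C4 (middle C) on a melodic channel
print(describe_note(9, 35))  # acoustic bass drum on the percussion channel
```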
The prior art teaches what is here called standard scalable polyphony MIDI (SP-MIDI), i.e. the prior art teaches providing, with a MIDI file, additional instructions as to how to interpret the MIDI file differently depending on the capabilities of the MIDI-compatible device (sequencer and sound modules). In essence, static SP-MIDI instructions, provided in the MIDI file, convey to a MIDI device the order in which channels are to be muted, or in other words masked, in case the MIDI device is not capable of creating all of the sounds indicated by the MIDI file. Thus, e.g., standard SP-MIDI instructions might convey that the corresponding MIDI file indicates at most nine channels and requires a polyphony of at most 20 notes at any time, but that if a certain channel (the one of lowest priority, say channel number three) is dropped, leaving only eight channels, then the number of notes required drops to sixteen. Thus, a MIDI device capable of producing only sixteen notes (of possibly different timbres) would drop channel number three (patched e.g. to a saxophone sound) and so, in the estimation of the composer/creator of the MIDI file, would sound as good as possible on the limited-polyphony MIDI device.
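The muting order just described can be sketched as follows. This is not the SP-MIDI wire format, only a model of its effect: channels are listed in priority order, each with the cumulative polyphony needed if that channel and all higher-priority channels are kept; the device keeps the longest prefix that fits its capability and mutes the rest. The channel numbering and the endpoint figures (nine channels, 20 notes, 16 notes without channel 3) are from the example above; the intermediate cumulative counts are invented for illustration.

```python
# (channel number, cumulative note count), highest priority first;
# keeping all nine channels needs 20 notes, while dropping channel 3
# (lowest priority, hence last in the list) leaves 16.
mip = [(1, 2), (2, 5), (5, 8), (4, 10), (6, 12), (7, 14), (8, 15), (9, 16), (3, 20)]

def channels_to_play(mip, device_polyphony):
    """Keep channels in priority order until the cumulative note count
    exceeds what the device can produce; mute/mask the remainder."""
    kept = []
    for channel, cumulative_notes in mip:
        if cumulative_notes > device_polyphony:
            break  # this and all lower-priority channels are muted
        kept.append(channel)
    return kept

print(channels_to_play(mip, 20))  # all nine channels fit
print(channels_to_play(mip, 16))  # channel 3 is dropped
```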
To produce sound corresponding to a MIDI file requires resources, including e.g. oscillators providing the sound module functionality and also equipment providing the sequencer functionality. The synthesizer and sequencer functionality is often provided by a general-purpose microprocessor used to run all sorts of different applications, i.e. a programmable software MIDI synthesizer is used. For example, a mobile phone may be playing music according to a MIDI file and at the same time creating screens corresponding to different web pages being accessed over the Internet. In such a case the resources include computing resources (e.g. CPU processing and memory), and the resources available for providing the synthesizer or sequencer functionality vary, and can sometimes drop to such a level that the mobile phone cannot, at least temporarily, perform the MIDI file “score” in the same way as before the decrease in available computing resources. As explained above, standard SP-MIDI allows for muting channels; more importantly, in case of resources that change over time, standard SP-MIDI helps by enabling the MIDI device to decrease in real time the computing resources it needs by muting/masking predetermined channels: to adjust to changed resource availability, the MIDI device simply recalculates its channel masking based on the newly available resources. The composer can control the corresponding musical changes through the prioritization of MIDI channels and careful preparation of the scalable musical arrangement. Standard SP-MIDI content may even contain multiple so-called maximum instantaneous polyphony (MIP) messages anywhere in a MIDI file, in addition to a required such message at the beginning of the file, thus enabling different muting strategies to be indicated for different segments of the MIDI file.
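The real-time adjustment described above amounts to re-running the same priority-prefix calculation whenever the affordable polyphony changes. A minimal sketch, again using invented intermediate figures for the MIP data and treating the currently available polyphony as a plain number:

```python
# Priority-ordered MIP data: (channel, cumulative note count needed to
# keep this channel and all higher-priority ones). Endpoint figures match
# the nine-channel example above; intermediate counts are illustrative.
mip = [(1, 2), (2, 5), (5, 8), (4, 10), (6, 12), (7, 14), (8, 15), (9, 16), (3, 20)]

def recompute_mask(mip, available_polyphony):
    """Return the set of channels to mute for the given resource level.
    Cumulative counts are nondecreasing in priority order, so comparing
    each against the available polyphony yields the priority prefix."""
    kept = {ch for ch, cumulative in mip if cumulative <= available_polyphony}
    return {ch for ch, _ in mip} - kept  # channels to mute/mask

# simulated fluctuation of available resources while the file plays
for available in [20, 16, 10, 20]:
    print(available, sorted(recompute_mask(mip, available)))
```

When resources recover, the recomputation naturally un-mutes channels again, which is the sense in which the adaptation is dynamic rather than a one-time configuration.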
While standard SP-MIDI does provide functionality in the time domain, it does not provide similar or corresponding functionality with respect to voice complexity. Standard SP-MIDI does not contain information about voices, only about notes. With standard SP-MIDI, the synthesizer manufacturer must make sure that there are enough voices available for the required polyphony (the number of simultaneous notes). For example, if each note uses two voices, then according to standard SP-MIDI there must be 40 voices available if the polyphony required by the content is 20; in other words, the synthesizer manufacturer must provide for the worst-case consumption of voices when only standard SP-MIDI is available to composers to cope with different synthesizer capabilities.
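The worst-case reservation above is simple arithmetic, shown here with the figures from the text (20 notes, two voices per note); the function name is illustrative.

```python
# Standard SP-MIDI counts notes, not voices, so a synthesizer must
# reserve voices for the case where every sounding note uses the maximum
# number of voices its patch allows.
def worst_case_voices(required_polyphony: int, max_voices_per_note: int) -> int:
    return required_polyphony * max_voices_per_note

print(worst_case_voices(20, 2))  # 40 voices for a 20-note polyphony
```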
Thus, what is needed is a more precise way of altering how a MIDI file is played on different MIDI devices with different capabilities, and ideally a more refined way of adapting to changes in resources available to a MIDI device while it is playing a MIDI file.