Context-sensitive musical performances have become key components of electronic and multimedia products such as stand-alone video games, computer-based video games, computer-based slide show presentations, computer animation, and other similar products and applications. As a result, music generating devices and/or music playback devices have become more highly integrated into electronic and multimedia products. Previously, musical accompaniment for multimedia products was provided in the form of pre-recorded music that could be retrieved and performed under various circumstances.
Using pre-recorded music for providing context-sensitive musical performances has several disadvantages. One disadvantage is that the pre-recorded music requires a substantial amount of memory storage. Another disadvantage is that the variety of music that can be provided using this approach is limited by the amount of available memory. The musical accompaniment for multimedia devices utilizing this approach is wasteful of memory resources and can be very repetitious.
Today, music generating devices are directly integrated into electronic and multimedia products for composing and providing context-sensitive musical performances. These musical performances can be dynamically generated in response to various input parameters, real-time events, and conditions. For instance, in a graphically based adventure game, the background music can change from a happy, upbeat sound to a dark, eerie sound in response to a user entering a cave, a basement, or some other generally mystical area. Thus, a user can experience the sensation of live musical accompaniment as he engages in a multimedia experience.
One way of accomplishing this is to define musical performances as combinations of chord progressions and note sequences, so that notes are calculated during a performance as a function of both a chord progression and a note sequence. A chord progression defines a time sequence of chords. An individual chord is defined as a plurality of notes, relative to an absolute music scale. A note sequence defines a time sequence of individual notes. The notes of a note sequence, however, are not defined in terms of the absolute music scale. Specifically, the notes are defined by their positions within chords, rather than by their absolute positions on a musical scale or keyboard. As a simple example, a note might be defined as the second note of a chord. This note would then vary, depending on the chord against which the note is played. The second note of a C chord is E, so an E is played when the note is interpreted in conjunction with a C chord. The second note of a G chord is B, so a B is played when the note is interpreted in conjunction with a G chord. Interpreting a note in this manner is referred to as playing the note "against" a specified chord. The result of this is that the notes of a musical track are transposed or mapped to different pitches when played against different chords.
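The scheme described above can be sketched in a few lines. The representations here are assumptions made for illustration only: a chord is modeled as a list of MIDI note values on the absolute scale, and a note of a note sequence as a 1-based position within whatever chord it is played against.

```python
# Chords modeled as lists of MIDI note values (an illustrative assumption).
C_CHORD = [60, 64, 67]  # C, E, G
G_CHORD = [55, 59, 62]  # G, B, D

def play_against(position, chord):
    """Resolve a note defined by its 1-based position within a chord
    to an absolute MIDI pitch."""
    return chord[position - 1]

# The "second note of a chord" from the example above varies with the chord:
second_against_c = play_against(2, C_CHORD)  # 64 -> E
second_against_g = play_against(2, G_CHORD)  # 59 -> B
```

The same position value thus maps to different absolute pitches depending on the chord, which is the transposition effect the text describes.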
To generate actual output notes based on a chord progression and a note sequence, the notes of the note sequence are played against the chords of the chord progression. The chords of the progression have associated timing, so that any given note from the note sequence is matched with a particular chord of the progression. When the note is played, it is played against the current chord of the progression. This scheme allows a musical performance to be varied in subtle ways, by changing either the chord progression or the note sequence as the performance progresses.
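The timing match between a note sequence and a chord progression might be sketched as follows. The tuple representations (a progression as time-sorted `(start_time, chord)` pairs, a note sequence as `(time, position)` pairs) are hypothetical choices, not taken from the text.

```python
def current_chord(progression, time):
    """Return the chord of the progression that is in effect at `time`.
    `progression` is a list of (start_time, chord) pairs sorted by time."""
    active = progression[0][1]
    for start, chord in progression:
        if start <= time:
            active = chord
        else:
            break
    return active

def perform(note_sequence, progression):
    """Play each (time, position) note against the chord current at its
    time, yielding absolute MIDI pitches."""
    return [current_chord(progression, time)[position - 1]
            for time, position in note_sequence]

# The same position-2 note yields E (64) and then B (59) as the
# underlying chord changes from C to G at time 4:
progression = [(0, [60, 64, 67]), (4, [55, 59, 62])]
notes = [(0, 2), (4, 2)]
pitches = perform(notes, progression)  # [64, 59]
```

Changing either input, the progression or the note sequence, changes the output, which is the source of the subtle variation the text mentions.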
Thus, one of the functions of a computer-based music performance engine is to derive output notes based on a note sequence and the chords of an underlying chord progression. Depending on the particular chords of the progression, a particular note generated from a note sequence might vary by a significant amount. In some cases, an output note may be transposed to a pitch that is outside of a desired pitch range or the range of an instrument.
To prevent notes from being transposed beyond permissible or desirable ranges, the performance engine automatically inverts notes to keep them within a specified range. Inversion involves transposing a note up or down one or more octaves, thereby forcing the note to fall within a specified range of pitch.
Previous systems have used an upper-pitch and a lower-pitch boundary to define a desired range of pitch. In these systems, each musical track (a track usually corresponds to a specific instrument or musical part) specified its own fixed inversion boundaries. The boundaries were specified in terms of MIDI (musical instrument digital interface) note values. When playing a note against a chord resulted in the note falling outside one of the boundaries, the note was inverted by an appropriate number of octaves to bring it back within the boundaries.
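A minimal sketch of this fixed-boundary inversion follows. The function name and looping approach are assumptions; the sketch also presumes the boundaries span at least one octave, so that an in-range pitch always exists.

```python
def invert_into_range(note, low, high):
    """Transpose a MIDI note up or down by whole octaves (12 semitones)
    until it lies within the fixed boundaries [low, high]."""
    while note < low:
        note += 12   # invert up an octave
    while note > high:
        note -= 12   # invert down an octave
    return note

# A note transposed to 74 against a track whose boundaries are 48..72
# is inverted down one octave to 62:
inverted = invert_into_range(74, 48, 72)  # 62
```

Because each note is inverted individually against per-track boundaries, neighboring notes of a run can end up in different octaves, which leads directly to the side effects described next.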
Although previous inversion techniques were able to keep a note sequence within a specified pitch range, this often had undesirable side effects. Specifically, the inversion of a note sometimes broke up a melodic run or line. A melodic run is a sequence of notes written with a specific harmonic relationship. Inverting individual notes within such a run can drastically alter the sound of the run, often causing it to lose its desired effect.
Another undesirable side effect of previous techniques was that inversion often changed the voicing of a specified chord. Again, this often produced an unacceptable change in the nature of the music.
Accordingly, there is a need for an improvement in the way automatic inversion is performed in systems such as the one described above.