Musical timing is based on a regular grid of beats and measures, and most music creation software acknowledges this by providing a fixed musical grid to which users can align musical events. Editing and creating music in software is made much easier by this grid of measures, beats, and beat subdivisions, and by editing operations that snap to it. However, when music is rigidly aligned to a grid, the finished composition may have a stiff, robotic feel.
In performed music, experienced musicians know how to intentionally stray from this grid, playing certain beats early and others late, and in doing so create a distinctive “feel” or “groove” in their music. To some extent, this application of “groove” has been available to electronic musicians: “swing” and “shuffle”, for example, are simple timing transformations that have long been offered. More complex transformations, however, have remained unavailable.
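As a minimal sketch of such a simple timing transformation, the hypothetical function below applies swing by delaying every off-beat subdivision; the function name, the representation of onsets as beat positions, and the `amount` parameter are illustrative assumptions, not drawn from any particular product.

```python
def apply_swing(onsets, subdivision=0.5, amount=0.33):
    """Delay every off-beat onset by a fraction of the subdivision.

    onsets: note start times in beats (illustrative representation);
    subdivision: grid step in beats (0.5 = eighth notes);
    amount: 0.0 = straight timing, ~0.33 approaches a triplet feel.
    """
    swung = []
    for t in onsets:
        step = round(t / subdivision)
        if step % 2 == 1:  # off-beat position: push it late
            t += subdivision * amount
        swung.append(t)
    return swung

# Straight eighth notes in, swung eighth notes out: the off-beats
# at 0.5 and 1.5 are delayed, the on-beats are left alone.
print(apply_swing([0.0, 0.5, 1.0, 1.5]))
```

Shuffle can be expressed the same way with a different `amount`; the point is that the transformation is a fixed rule applied uniformly across the grid, which is what limits its expressiveness.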
Another disadvantage of prior musical grids is that the media used within a project are not always well “quantized”. Quantization refers to the placement of notes in precise positions and patterns based on an ideal musical grid. As a result, if the events that contain the media are snapped to the musical grid, the notes and rhythms within an event will not necessarily synchronize well with the contents of other events in the project. One solution for non-quantized music is to “quantize” the contents of events to an “ideal” musical grid. As before, however, this removes much of the human feel of the musical content as it was originally produced.
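Quantization as described above can be sketched as snapping each onset toward its nearest grid line. This is an assumed illustration, not any specific product's algorithm; the `strength` parameter is a common refinement that snaps only partially, but at full strength it discards the original timing entirely, which is exactly the loss of human feel noted above.

```python
def quantize(onsets, grid=0.25, strength=1.0):
    """Move each onset toward the nearest grid line.

    grid: grid spacing in beats (0.25 = sixteenth notes);
    strength: 1.0 snaps fully to the grid, lower values move
    the note only part of the way, retaining some feel.
    """
    out = []
    for t in onsets:
        target = round(t / grid) * grid  # nearest ideal grid position
        out.append(t + (target - t) * strength)
    return out

# Slightly loose sixteenth notes snapped to the ideal grid.
print(quantize([0.02, 0.26, 0.49]))
```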
Prior attempts at modifying this timing, sometimes called “groove timing”, have generally applied such transformations either to an entire project or to individual tracks or source media.
Other approaches have applied a simple or complex groove to event data. Such a transformation may be performed in several ways. First, the event's contents may be transformed, such as by rewriting them to a new media file or a virtual file. Second, the event may be split into smaller events, some or all of which are shifted in time. Finally, a number of selected events may have their positions shifted on the timeline.
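The second approach, splitting an event and shifting the pieces, can be sketched as follows. Everything here is an assumed illustration: events are modeled as (start, duration) pairs in beats, and the groove is a short list of per-subdivision offsets that is cycled across the pieces.

```python
def split_and_shift(event, grid=0.5, offsets=(0.0, 0.08)):
    """Split one event into grid-sized sub-events, then shift each
    sub-event's start by a groove offset chosen by cycling through
    the offsets list. Returns a list of (start, duration) pieces.
    """
    start, duration = event
    pieces = []
    t = start
    i = 0
    while t < start + duration - 1e-9:
        length = min(grid, start + duration - t)  # last piece may be short
        pieces.append((t + offsets[i % len(offsets)], length))
        t += grid
        i += 1
    return pieces

# A two-beat event cut into four half-beat pieces; every second
# piece is nudged late by 0.08 beats.
print(split_and_shift((0.0, 2.0)))
```

Note that this is inherently a timeline-level edit: the project now contains many small events rather than one, which is one reason such transformations have been awkward to manage or reverse.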
Another prior attempt at groove timing adjusted the timing of audio data, such as to apply swing, shuffle, or quantization, using “time stretching” algorithms that subdivide the audio into tiny segments and then insert silence, insert repeated portions of the audio stream, cross-fade adjacent portions of the audio, or perform some combination of the above.
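The segment-based idea behind such algorithms can be illustrated with a deliberately crude sketch: cut the audio into short fixed-size segments and repeat or drop whole segments to change duration without resampling. This is an assumption for illustration only; real time-stretching algorithms choose segment boundaries carefully and cross-fade at the seams, which this sketch omits.

```python
def stretch_naive(samples, factor, segment=4):
    """Crude time-stretch by segment repetition or omission.

    samples: a list of audio samples (illustrative, not a real format);
    factor > 1 repeats segments to lengthen the audio,
    factor < 1 drops segments to shorten it. Pitch is unchanged
    because the samples themselves are never resampled.
    """
    out = []
    acc = 0.0  # fractional accumulator deciding how many copies to emit
    for i in range(0, len(samples), segment):
        seg = samples[i:i + segment]
        acc += factor
        while acc >= 1.0:
            out.extend(seg)
            acc -= 1.0
    return out

# Doubling the duration: each 4-sample segment is emitted twice.
print(stretch_naive(list(range(8)), 2.0))
```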
Yet another attempt at groove timing involved the creation of “groove templates”. These are generally produced by professionals, as the detailed timing variations that constitute a “human feel” are often subtle and not easily perceived in isolation, although they may be pronounced when heard in the context of a complete piece of music. Creating groove templates is difficult, however, because the tools for doing so are unwieldy and, among other limitations, provide no interactive feedback on the effect each adjustment to a template will have when applied to musical material.
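Conceptually, a groove template is a table of small timing offsets, one per grid position within a bar, that is applied on top of quantized positions. The sketch below assumes that representation; the template values shown are arbitrary, and the function name is hypothetical.

```python
def apply_groove_template(onsets, grid=0.25,
                          template=(0.0, 0.02, -0.01, 0.03)):
    """Quantize each onset to the grid, then displace it by the
    template offset stored for its grid position (cycled per bar
    segment). Offsets are in beats; negative values play early.
    """
    out = []
    for t in onsets:
        slot = round(t / grid)                       # nearest grid slot
        offset = template[slot % len(template)]      # template entry for slot
        out.append(slot * grid + offset)
    return out

# Four straight sixteenth notes take on the template's push and pull.
print(apply_groove_template([0.0, 0.25, 0.5, 0.75]))
```

Crafting good values for such a table by ear, without hearing each adjustment in musical context, is precisely the difficulty described above.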
In the above attempts to modify the groove of a media file, the media file itself is generally the subject of the modification. That is, in some prior attempts, applying the groove was a one-time destructive process because it altered the underlying musical data. As a result, if the user desires to remove or undo the groove, or apply a different groove, the media file must be restored to its original form, which may or may not be possible. There is no convenient way to reversibly test the application of a groove to a media file, or alternatively to apply a groove without altering the underlying media file.