1. Field of the Invention
The present invention relates to a technology for generating images in response to music and, in particular, to a system for generating graphical moving images in response to data obtained by interpreting music.
2. Description of the Related Art
A number of technologies for changing images using computer graphics (CG) in response to music already exist in the form of game software. One example is background visuals (BGV), in which images change in time to the music as a secondary feature subordinate to the primary game play. BGV technology merely synchronizes the music and the graphics, and is not intended for fine control of images using music control data. In addition, among such game software there are no titles featuring objects, such as dancers, that move dynamically and musically. Some titles display psychedelic images, but the discomfort these cause soon tires players of them. Furthermore, there are now titles that generate computer graphics, such as flashing lights, in response to music data such as MIDI data.
On the other hand, there are also technologies which, without the aid of computer graphics, use prepared image patterns to display motion images corresponding to a soundtrack. For example, Japanese Unexamined Patent No. 63-170697 discloses a music/imaging device which determines the mood of the music output from an electronic instrument via a musical mood sensor, reads out a plurality of image patterns in regular succession via select signals corresponding to that musical mood, and displays motion images, such as dancing figures or geometric designs, according to the musical mood. Under these existing technologies, however, the music data is reduced by the musical mood sensor to select signals reflecting only the musical mood, so it is not possible to obtain motion images perfectly in sync with the original music.
In addition, using prepared image pattern data, as in the above-mentioned music/imaging device, yields little variety even when a large amount of data is stored. To obtain diverse motion images that better conform to the music, still more image pattern data must be prepared. Moreover, it was extremely difficult to satisfy the diverse needs of end-users, since once the settings were in place, users could not change the displayed images as they wished.
Furthermore, when generating CG motion images based on music data, because the image generation occurs as an after-effect of the musical event, there is a risk of an image-generation time lag which cannot be ignored. Also, when interpolating to produce smooth motion images, it is not always possible to create CG animation in sync with the music, as changes in animation speed and skipped pictures at keyframe positions may occur depending on the computer's CG rendering capacity or variations in CPU load. Moreover, when modeling instrument players with CG motion images in music applications, natural movements corresponding to the music data cannot be imparted to these CG motion images merely by individually controlling each portion of the image according to every piece of music data.
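The synchronization problem noted above can be illustrated with a minimal sketch (the function and data names are hypothetical and not part of any disclosed system): if poses are interpolated as a function of music playback time rather than frame count, a slow renderer may drop intermediate pictures, but the pose drawn at any given moment still matches the music, rather than drifting as frame rate varies.

```python
def interpolate_pose(keyframes, t):
    """Linearly interpolate a pose value between timed keyframes.

    keyframes: list of (time_in_seconds, value) pairs, sorted by time.
    t: current music playback time in seconds (e.g. from the audio clock).
    """
    # Clamp to the first/last keyframe outside the keyed range.
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    # Find the surrounding pair of keyframes and blend between them.
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return v0 + alpha * (v1 - v0)

# Example: an arm angle keyed to beats at 0.0 s, 0.5 s, and 1.0 s.
keys = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]
# Sampling by playback time: even if frames are dropped under CPU load,
# the pose at a given music time is always the same.
print(interpolate_pose(keys, 0.25))  # 0.5
```

Because the interpolation is indexed by the music clock, a renderer that cannot keep up simply samples the motion more coarsely instead of falling progressively behind the music.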