The present invention relates to a method and apparatus for performing morphing. Primarily, the present invention is intended for morphing between different sounds. However, the present invention can be used for other types of morphing, such as morphing between images.
The term “morphing” is generally used to describe a smooth transition between multiple states of an “object”, although it is usual to effect a transition between two states. For example, the first state of the object could be the image of a human head and the second state the image of a wolf's head.
In this example, the simplest possible transition would be to fade out one image while fading in the other image. This is known as cross-fading. However, this would be an implausible transition between the two displays for a viewer, who would not perceive that the human head is turning into the wolf's head, and it is not what is commonly meant by morphing.
Instead, what is commonly meant by morphing is the cross-fading of underlying properties of the object, not the result of those underlying properties. In the given example of morphing between a human head and a wolf's head, morphing involves establishing underlying properties of the image of the human head and corresponding underlying properties of the image of the wolf's head. These corresponding underlying properties are then cross-faded. For example, the positions, colours and shapes of corresponding portions of the respective images (such as the eyes, mouth and ears of the human and wolf's head respectively) would be determined and cross-faded smoothly. Thus, a viewer would see the eyes, mouth and ears of a human all separately changing into those of a wolf. This provides a much more plausible effect.
Similarly, morphing between two sounds does not mean cross-fading of the volume of two sounds, but rather continuously changing the underlying properties of the sound, such as timbre and pitch.
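The distinction between cross-fading the output and cross-fading an underlying property can be sketched as follows. This is a minimal illustration, not part of the described system; the sample rate and the two pitches are assumed values chosen only for the example.

```python
import math

RATE = 8000                # sample rate in Hz (assumed for illustration)
F_A, F_B = 220.0, 440.0    # pitch of the first and second state (assumed)

def crossfade(n_samples):
    """Fade out a fixed 220 Hz tone while fading in a fixed 440 Hz tone.

    Mid-fade, both tones are simultaneously audible: the output, not an
    underlying property, is being cross-faded.
    """
    out = []
    for n in range(n_samples):
        w = n / (n_samples - 1)          # weighting running from 0 to 1
        t = n / RATE
        a = math.sin(2 * math.pi * F_A * t)
        b = math.sin(2 * math.pi * F_B * t)
        out.append((1 - w) * a + w * b)
    return out

def pitch_morph(n_samples):
    """Continuously change the underlying pitch from 220 Hz to 440 Hz.

    Only one tone is ever heard; its parameter (the pitch) is what is
    cross-faded, which is what is meant by morphing here.
    """
    out, phase = [], 0.0
    for n in range(n_samples):
        w = n / (n_samples - 1)
        f = (1 - w) * F_A + w * F_B      # cross-fade the parameter itself
        phase += 2 * math.pi * f / RATE
        out.append(math.sin(phase))
    return out
```

In the first function a listener would hear two superimposed tones part-way through; in the second, a single tone whose pitch glides smoothly.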
Music and other sound synthesisers are well-known. Commonly, synthesisers are computer-implemented, using either a standard personal computer (PC) with associated peripheral devices or a piano-style keyboard linked to a computer circuit. The synthesiser may also comprise a graphical user interface.
For example, a PC may have loaded thereon a music synthesiser application program, which has algorithms to implement, for example, different oscillators. Each oscillator may produce waveforms having different shapes and the frequency of each oscillator may be modified individually. The waveforms output by the oscillators may be mixed using a mixer—that is, added together or overlaid—to form a complex waveform, the relative strength or amplitude of the individual waveforms in the complex waveform also being controlled. Further shaping of the individual waveforms and of the complex waveform using one or more filters, such as a bandpass filter, can also be contemplated. Thus, a synthesiser may include oscillators (to generate repetitive waveforms), mixers (to combine waveforms), filters (to increase the strength of some overtones while reducing the strength of others) and amplifiers (to shape the contours of the waveforms). The output complex waveform can then be sent electrically to a speaker so that the user can listen to the created sound. In this way, a user is enabled to create a large number of distinct sounds. Of course, it should be recognised that different numbers of these components can be used, and that other components can also be used.
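The oscillator, mixer and filter chain described above can be sketched as follows. This is a minimal illustration under assumed values: the sample rate, waveform shapes, frequencies, mixing weights and one-pole filter are all choices made for the example, not features of any particular synthesiser program.

```python
import math

RATE = 8000  # sample rate in Hz (assumed for illustration)

def sine_osc(freq, n):
    """Oscillator producing a sine waveform at the given frequency."""
    return [math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]

def square_osc(freq, n):
    """Oscillator producing a square waveform (sign of the sine)."""
    return [1.0 if math.sin(2 * math.pi * freq * i / RATE) >= 0 else -1.0
            for i in range(n)]

def mix(waves, weights):
    """Mixer: add the waveforms together, each with its own amplitude."""
    return [sum(w * wave[i] for w, wave in zip(weights, waves))
            for i in range(len(waves[0]))]

def lowpass(wave, alpha=0.1):
    """One-pole filter: reduces the strength of the higher overtones."""
    out, y = [], 0.0
    for x in wave:
        y += alpha * (x - y)
        out.append(y)
    return out

# Build a complex waveform from two oscillators, then shape it.
complex_wave = lowpass(mix([sine_osc(220, 100), square_osc(330, 100)],
                           [0.7, 0.3]))
```

The relative strengths 0.7 and 0.3 play the role of the individually controlled amplitudes of the component waveforms in the complex waveform.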
The settings used to create these distinct sounds can be saved and modified to create similar sounds having different tones, thereby forming a musical scale of a particular synthesised instrument. In addition, the timbre of the synthesised instruments can be controlled. The synthesiser program may also have algorithms representing a large number of pre-stored synthesised musical instruments, such as piano, drum kit, violin and so forth, so that the user does not have to individually create each distinct sound. Moreover, different tones output by different synthesised instruments for different amounts of time may be combined to create a medley of notes having various textures—in other words, music. In addition, a previously recorded sample of music may be mixed with the synthesised sounds. The sample might, for example, be a sung piece of music.
To facilitate musical control of the elements in the software-implemented synthesiser, the system 1 may comprise a GUI (Graphical User Interface) on a display screen controlled using a mouse and a standard PC keyboard. In addition, a piano-style keyboard may be used to “play” the synthesised instrument. Other controls, including wheels, sliders, switches and joysticks may also be provided, either together with or separate from the piano-style keyboard.
In addition to using simple oscillators to synthesise sounds, it is well known to model the oscillations of a vibrating string and to convert the results into sound using one or more simulated pickups. Thus, the vibration of each of the strings of a stringed instrument can be modelled by a sound synthesiser.
There are several possible approaches to modelling a vibrating string. One such approach is to describe the modelled string by means of a differential equation, which can then be solved numerically for a set of discrete elements of the string by means of a standard iterative method using a computer. Such an equation may take into account variables such as the force applied to the string, together with the time and position at which it is applied; the mass per unit length of the string; the stiffness of the string; the tension of the string; losses associated with the stiffness of the string; losses associated with the tension of the string; and losses associated with the turbulent flow of the air surrounding the string.
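Such an iterative solution can be sketched with a finite-difference scheme. The sketch below assumes a damped wave equation retaining only the tension and a single loss term (stiffness and air-turbulence losses are omitted for brevity), and all constants are illustrative assumptions rather than physically calibrated values.

```python
N = 50       # number of discrete string elements (assumed)
C2 = 0.25    # (wave speed * dt / dx)^2, kept below the stability limit of 1
DAMP = 0.001 # simple loss coefficient (assumed)

def step(y, y_prev):
    """Advance the string by one time step; the endpoints stay fixed."""
    y_next = [0.0] * N
    for i in range(1, N - 1):
        accel = C2 * (y[i + 1] - 2 * y[i] + y[i - 1])    # tension term
        y_next[i] = (2 * y[i] - y_prev[i] + accel
                     - DAMP * (y[i] - y_prev[i]))        # loss term
    return y_next

# Excite the string by displacing its midpoint (a pluck-like excitation).
y = [0.0] * N
y[N // 2] = 1.0
y_prev = list(y)

for _ in range(200):
    y, y_prev = step(y, y_prev), y

# A simulated pickup reads the displacement at one point along the string.
pickup = y[N // 4]
```

Recording the pickup value at every time step would yield the output waveform, mimicking the conversion of string vibration into sound by a simulated pickup.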
Several methods for exciting such a simulated string and hence applying a force to the discrete elements are known. These include exciting the string in a percussive way, for example using a modelled piano hammer or a modelled plectrum to hit, pluck or otherwise strike the simulated string. Another way of exciting the string is to use a modelled bow, which mimics the action of a bow on a violin or cello.
Accordingly, synthesised sounds have a number of underlying parameters. These parameters can be readily controlled. For example, if a synthesiser simulates a string excited by a bow, the controllable set of parameters could include the positions of one or more simulated microphone pickups relative to the string, the string stiffness, losses affecting the string, bow position, bow pressure and bow speed. The same musical note can be played and the values of all these underlying parameters can be changed to effect morphing.
More specifically, a sound can be defined using a set of parameters. Values can be assigned to each of the parameters in the set in one arbitrary state and different values can be assigned to the same set of parameters in a second arbitrary state. If a single note is played, then the sound heard by the user will be different depending on whether the first state or the second state is selected. The sound can be morphed from the first state to the second state by cross-fading the values of the underlying parameters in the respective states. Accordingly, depending on the set of parameters chosen, it is for example possible to morph from one instrument, such as a piano, to another instrument, such as a saxophone, while playing a note or a tune.
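Cross-fading the values of a parameter set between two states can be sketched as follows. The parameter names follow the bowed-string examples above, but the specific values are illustrative assumptions.

```python
# Two arbitrary states of the same underlying parameter set (values assumed).
state_a = {"bow_position": 0.10, "bow_pressure": 0.8, "bow_speed": 0.3}
state_b = {"bow_position": 0.25, "bow_pressure": 0.4, "bow_speed": 0.9}

def morph(a, b, w):
    """Return the parameter set at weighting w (0 = state a, 1 = state b).

    Each underlying parameter is cross-faded individually, so intermediate
    values of w yield intermediate sounds rather than a mix of two outputs.
    """
    return {k: (1 - w) * a[k] + w * b[k] for k in a}

halfway = morph(state_a, state_b, 0.5)
```

Sweeping w from 0 to 1 while a note is held would morph the sound smoothly from the first state to the second.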
Most morphing applications are performed in one dimension between two states. One example of this is morphing between a human head and a wolf's head, discussed above. Another example is morphing between two sounds using a specially provided modulation wheel on a synthesiser keyboard, for example as implemented in the Clavia Nordlead series. In this case, the state of the sound output by the synthesiser will depend on how far the wheel is rotated between the two endpoints, which respectively represent first and second states. Effectively, the state of the output sound is made up of a combination of the first state and the second state. The degrees to which the first and second states manifest themselves in the output sound depend on a weighting, which is determined by the rotation of the wheel.
Two-dimensional morphing with more than two states is also known, for example in the VirSyn Cube synthesiser. An example of prior art two-dimensional morphing will now be described with reference to FIG. 9, which shows a display 110. The display 110 represents four different states A, B, C and D as four points, the four points forming a square or rectangular two-dimensional pad 120. A cursor 130 is movable within the rectangular pad 120 to determine the weighting of each of the four states A, B, C and D in the output sound. If the cursor is positioned at one of the corners, the output sound takes only the state represented by that corner. If the cursor is positioned halfway between two points, the output sound takes a state in which the weighting of the two points is equal. If the cursor is positioned equidistantly from all four points, the output sound takes a state in which the weighting of each of the points is equal. Thus, morphing between four sounds is possible.
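One way the four-corner weighting just described can be realised is with bilinear weights on a unit square. The sketch below is an assumption made for illustration; the particular assignment of states A, B, C and D to corners, and the use of bilinear weights, are not taken from any specific prior-art product.

```python
def corner_weights(x, y):
    """Weights of states A (0,0), B (1,0), C (0,1) and D (1,1).

    (x, y) is the cursor position, each coordinate running from 0 to 1
    across the pad. The four weights always sum to 1.
    """
    return {
        "A": (1 - x) * (1 - y),
        "B": x * (1 - y),
        "C": (1 - x) * y,
        "D": x * y,
    }

# At a corner, only that state contributes; halfway along an edge, the two
# adjacent states are weighted equally; at the centre, all four are equal.
```

Under these weights, the cursor at a corner yields exactly one state, the cursor at the midpoint of an edge weights the two adjacent states equally, and the cursor at the centre weights all four states equally, matching the behaviour described above.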
It is desired to effect morphing between more than four states. However, in prior art two-dimensional morphing, there is no means by which a fifth point could be positioned without taking on a weighting of at least two of the other four states. Thus, there is no known method by which a fifth state can be added in the prior art.