1. Field of the Invention
The present invention relates to the field of voice synthesis and, more particularly, to improving the expressivity of voiced sounds generated by a voice synthesiser.
2. Description of the Prior Art
In the last few years there has been tremendous progress in the development of voice synthesisers, especially in the context of text-to-speech (TTS) synthesisers. There are two fundamental approaches to voice synthesis: the sampling approach (sometimes referred to as the concatenative or diphone-based approach) and the source-filter (or "articulatory") approach. In this respect see "Computer Sound Synthesis for the Electronic Musician" by E. R. Miranda, Focal Press, Oxford, UK, 1998.
The sampling approach makes use of an indexed database of digitally recorded short spoken segments, such as syllables, for example. When it is desired to produce an utterance, a playback engine then assembles the required words by sequentially combining the appropriate recorded short segments. In certain systems, some form of analysis is performed on the recorded sounds in order to enable them to be represented more effectively in the database. In others, the short spoken segments are recorded in encoded form: for example, in U.S. Pat. No. 3,982,070 and U.S. Pat. No. 3,995,116 the stored signals are the coefficients required by a phase vocoder in order to regenerate the sounds in question.
The sampling approach to voice synthesis is the approach that is generally preferred for building TTS systems and, indeed, it is the core technology used by most computer-speech systems currently on the market.
The source-filter approach produces sounds from scratch by mimicking the functioning of the human vocal tract (see FIG. 1). The source-filter model is based upon the insight that the production of vocal sounds can be simulated by generating a raw source signal that is subsequently moulded by a complex filter arrangement. In this context see, for example, "Software for a Cascade/Parallel Formant Synthesiser" by D. Klatt from the Journal of the Acoustical Society of America, 63(2), pp. 971-995, 1980.
In humans, the raw sound source corresponds to the outcome of the vibrations created by the glottis (the opening between the vocal cords) and the complex filter corresponds to the vocal tract "tube". The complex filter can be implemented in various ways. In general terms, the vocal tract is considered as a tube (with a side-branch for the nose) sub-divided into a number of cross-sections whose individual resonances are simulated by the filters.
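The source-filter principle described above can be sketched in a few lines: a crude glottal source (an impulse train at the fundamental frequency) is passed through a cascade of two-pole resonators, each simulating one vocal-tract resonance (formant). This is an illustrative sketch only; the formant frequencies for /a/ and the resonator design are textbook approximations, not part of the invention.

```python
import math

def two_pole_resonator(signal, center_hz, bandwidth_hz, sample_rate):
    """Filter `signal` with a two-pole resonator tuned to one formant."""
    r = math.exp(-math.pi * bandwidth_hz / sample_rate)   # pole radius
    theta = 2.0 * math.pi * center_hz / sample_rate       # pole angle
    a1, a2 = -2.0 * r * math.cos(theta), r * r            # feedback coefficients
    gain = 1.0 - r                                        # rough level normalisation
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = gain * x - a1 * y1 - a2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

def impulse_train(f0_hz, duration_s, sample_rate):
    """Crude glottal source: one unit impulse per fundamental period."""
    period = int(sample_rate / f0_hz)
    n = int(duration_s * sample_rate)
    return [1.0 if i % period == 0 else 0.0 for i in range(n)]

sample_rate = 16000
source = impulse_train(110.0, 0.1, sample_rate)           # 110 Hz "glottal" source
vowel = source
for formant, bw in [(730.0, 90.0), (1090.0, 110.0), (2440.0, 120.0)]:  # rough /a/ formants
    vowel = two_pole_resonator(vowel, formant, bw, sample_rate)
```

The cascade structure mirrors FIG. 1: the same raw source can produce different vowels simply by retuning the resonator centre frequencies, without touching the source itself.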
In order to facilitate the specification of the parameters for these filters, the system is normally furnished with an interface that converts articulatory information (e.g. the positions of the tongue, jaw and lips during utterance of particular sounds) into filter parameters; hence the source-filter model is sometimes referred to as the articulatory model (see "Articulatory Model for the Study of Speech Production" by P. Mermelstein from the Journal of the Acoustical Society of America, 53(4), pp. 1070-1082, 1973). Utterances are then produced by telling the program how to move from one set of articulatory positions to the next, similar to key-frame animation in visual animation. In other words, a control unit controls the generation of a synthesised utterance by setting the parameters of the sound source(s) and the filters for each of a succession of time periods, in a manner which indicates how the system moves from one set of "articulatory positions", and source sounds, to the next in successive time periods.
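The key-frame style of control described above amounts to interpolating every control parameter between successive "articulatory positions". A minimal sketch, assuming (hypothetically) that each position is represented as a dictionary of formant-frequency targets:

```python
def interpolate_frames(start_params, end_params, num_frames):
    """Linearly interpolate every control parameter between two key frames,
    the way a control unit moves the synthesiser from one set of
    "articulatory positions" to the next over successive time periods."""
    frames = []
    for i in range(num_frames):
        t = i / (num_frames - 1) if num_frames > 1 else 0.0
        frames.append({k: (1 - t) * start_params[k] + t * end_params[k]
                       for k in start_params})
    return frames

# Hypothetical formant targets for a glide from /a/ toward /i/:
a_pose = {"F1": 730.0, "F2": 1090.0, "F3": 2440.0}
i_pose = {"F1": 270.0, "F2": 2290.0, "F3": 3010.0}
trajectory = interpolate_frames(a_pose, i_pose, 5)
```

Each frame of `trajectory` would be handed to the filter arrangement for one time period, producing a smooth articulatory transition.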
There is a need for an improved voice synthesiser for use in research into the fundamental mechanisms of language evolution. Such research is being performed, for example, in order to improve the linguistic abilities of computer and robotic systems. One of these fundamental mechanisms involves the emergence of phonetic and prosodic repertoires. The study of these mechanisms requires a voice synthesiser that is able to: i) support evolutionary research paradigms, such as self-organisation and modularity, ii) support a unified form of knowledge representation for both vocal production and perception (so as to be able to support the assumption that the abilities to speak and to listen share the same sensory-motor mechanisms), and iii) speak and sing expressively (including emotion and paralinguistic features).
Synthesisers based on the sampling approach satisfy none of the three basic requirements indicated above. Conversely, the source-filter approach is compatible with requirements i) and ii) above, but the systems that have been proposed so far need to be improved in order to fulfil requirement iii).
The present inventor has found that the articulatory simulation used in conventional voice synthesisers based on the source-filter approach works satisfactorily for the filter part of the synthesiser but the importance of the source signal has been largely overlooked. Substantial improvements in the quality and flexibility of source-filter synthesis can be made by addressing the importance of the glottis more carefully.
The standard practice is to implement the source component using two generators: one generating white noise (to simulate the production of consonants) and one generating a periodic harmonic pulse (to simulate the production of vowels). The general structure of a voice synthesiser of this conventional type is illustrated in FIG. 2. By carefully controlling the amount of signal that each generator sends to the filters, one can roughly simulate whether the vocal folds are tensed (for vowels) or relaxed (for consonants). The main limitations of this method are:
a) The mixing of the noise signal with the pulse signal does not sound realistic: the noise and pulse signals do not blend well together because they are of a completely different nature. Moreover, the rapid switches from noise to pulse, and vice versa (needed to make words containing both consonants and vowels), often produce a "buzzy" voice.
b) The spectrum of the pulse signal is composed of harmonics of its fundamental frequency (i.e. F0, 2*F0, 3*F0, 4*F0, etc.). This implies a source signal whose components cannot vary before entering the filters, thus limiting the timbral quality of the voice.
c) The spectrum of the pulse signal has a fixed envelope in which the energy of each harmonic decreases by 6 dB for each doubling in frequency. A source signal that always has the same spectral shape undermines the flexibility to produce timbral nuances in the voice. Also, high-frequency formants are penalised in cases where they need to carry more energy than the lower ones.
d) In addition to b) and c) above, the spectrum of the source signal lacks a dynamic trajectory: both the frequency spacing of the spectral components and their amplitudes remain static from the start to the end of a given time period. This lack of time-varying attributes impoverishes the prosody of the synthesised voice.
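The conventional two-generator source that exhibits limitations a) to d) can be sketched as follows: a band-limited pulse whose k-th harmonic has amplitude 1/k (a fixed roll-off of 6 dB per octave, since doubling the frequency halves the amplitude), cross-faded with white noise by a single voicing control. The parameter names are illustrative, not taken from any particular synthesiser.

```python
import math, random

def conventional_source(f0_hz, voicing, num_samples, sample_rate):
    """Conventional two-generator source: a harmonic pulse with a fixed
    1/k amplitude envelope (-6 dB per octave), mixed with white noise.
    `voicing` = 1.0 simulates tensed vocal folds (vowels); 0.0 simulates
    relaxed folds (consonants)."""
    rng = random.Random(0)                     # fixed seed for reproducibility
    num_harmonics = int((sample_rate / 2) // f0_hz)
    out = []
    for n in range(num_samples):
        t = n / sample_rate
        pulse = sum(math.sin(2 * math.pi * k * f0_hz * t) / k
                    for k in range(1, num_harmonics + 1))
        noise = rng.uniform(-1.0, 1.0)
        out.append(voicing * pulse + (1.0 - voicing) * noise)
    return out

vowel_source = conventional_source(110.0, 1.0, 800, 16000)
consonant_source = conventional_source(110.0, 0.0, 800, 16000)
```

Note how the spectral envelope (1/k) and the harmonic spacing (k*F0) are hard-wired: exactly the static behaviour criticised in limitations b) to d), and how abrupt changes of `voicing` would splice two signals of completely different nature, as in limitation a).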
A particular speech synthesiser based on the source-filter approach has been proposed in U.S. Pat. No. 5,528,726 (Cook), in which different glottal source signals are synthesised. In this speech synthesiser, the filter arrangement uses a digital waveguide network and a parameter library is employed that stores sets of waveguide junction control parameters and associated glottal source signal parameters for generating sets of predefined speech signals. In this system, the basic glottal pulse making up the different glottal source signals is approximated by a waveform which begins as a raised cosine waveshape but then continues in a straight-line portion (closing edge) leading down to zero and remaining at zero for the rest of the period. The different glottal source signals are formed by varying the beginning and ending points of the closing edge, with fixed opening slope and time. Rather than storing representations of these different glottal source signals, the Cook system stores parameters of a Fourier series representation of the different source signals.
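The pulse shape described for the Cook system (raised-cosine opening, straight-line closing edge, then silence for the remainder of the period) can be reconstructed for illustration as below. This is a sketch of the described waveform only, not the patented implementation; the sample counts are arbitrary.

```python
import math

def cook_style_pulse(period_samples, open_end, close_end):
    """One cycle of a glottal pulse of the type described in the Cook
    system: a raised-cosine rise up to sample `open_end`, a linear
    closing edge falling to zero at `close_end`, then zero (closed
    glottis) for the rest of the period."""
    pulse = []
    for n in range(period_samples):
        if n < open_end:                      # raised-cosine opening phase
            pulse.append(0.5 * (1.0 - math.cos(math.pi * n / open_end)))
        elif n < close_end:                   # straight-line closing edge
            pulse.append(1.0 - (n - open_end) / (close_end - open_end))
        else:                                 # closed phase
            pulse.append(0.0)
    return pulse

cycle = cook_style_pulse(160, 60, 100)        # ~100 Hz pulse at 16 kHz
```

Varying `close_end` (the beginning and end of the closing edge) while keeping the opening phase fixed reproduces the family of source signals the Cook system parameterises.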
Although the Cook system involves the synthesis of different types of glottal source signal, based on parameters stored in a library, with a view to subsequent filtering by an arrangement modelling the vocal tract, the different types of source signal are generated from a single cycle of a respective basic pulse waveform derived from a raised cosine function. More importantly, there is no optimisation of the different types of source signal with a view to improving the expressivity of the final sound signal output from the global source-filter type synthesiser.
The preferred embodiments of the present invention provide a method and apparatus for voice synthesis adapted to fulfil all of the above requirements i)-iii) and to avoid the above limitations a) to d). In particular, the preferred embodiments of the invention improve expressivity of the synthesised voice (requirement iii) above), by making use of a parametrical library of source sound categories each corresponding to a respective morphological category.
The preferred embodiments of the present invention further provide a method and apparatus for voice synthesis in which the source signals are based on waveforms of variable length, notably waveforms corresponding to a short segment of a sound that may include more than one cycle of a repeating waveform of substantially any shape.
The preferred embodiments of the present invention yet further provide a method and apparatus for voice synthesis in which the source signal categories are derived based on analysis of real speech.
In the preferred embodiments of the present invention, the source component of a synthesiser based on the source-filter approach is improved by replacing the conventional pulse generator by a library of morphologically-based source sound categories that can be retrieved to produce utterances. The library stores parameters relating to different categories of sources tailored for respective specific classes of utterances, according to the general morphology of these utterances. Examples of typical classes are "plosive consonant to open vowel", "front vowel to back vowel", a particular emotive timbre, etc. The general structure of this type of voice synthesiser according to the invention is indicated in FIG. 3.
Voice synthesis methods and apparatus according to the present invention enable an improvement to be obtained in the smoothness of the synthesised utterances, because signals representing consonants and vowels both emanate from the same type of source (rather than from noise and/or pulse sources).
According to the present invention it is preferred that the library should be "parametrical", in other words the stored parameters are not the sounds themselves but parameters for sound synthesis. The resynthesised sound signals are then used as the raw sound signals which are input to the complex filter arrangement modelling the vocal tract. The stored parameters are derived from analysis of speech and these parameters can be manipulated in various ways, before resynthesis, in order to achieve better performance and more expressive variations.
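The idea of a parametrical library keyed by morphological category can be sketched as follows. All category names and parameter values here are hypothetical placeholders; the point is that what is stored and manipulated is synthesis parameters, not recorded waveforms.

```python
# Hypothetical parametrical library: each morphological category maps to
# additive-synthesis parameters (per-partial frequencies and amplitudes)
# rather than to a stored sound.
SOURCE_LIBRARY = {
    "plosive-to-open-vowel": {
        "partial_freqs": [110.0, 220.0, 330.0],   # Hz, illustrative values
        "partial_amps":  [1.0, 0.5, 0.25],
    },
    "front-vowel-to-back-vowel": {
        "partial_freqs": [130.0, 260.0, 390.0],
        "partial_amps":  [1.0, 0.6, 0.2],
    },
}

def retrieve_source_params(category, pitch_scale=1.0):
    """Fetch a category's parameters and transpose them before resynthesis.
    Because the library stores parameters rather than sounds, such
    manipulations come almost for free."""
    entry = SOURCE_LIBRARY[category]
    return {
        "partial_freqs": [f * pitch_scale for f in entry["partial_freqs"]],
        "partial_amps":  list(entry["partial_amps"]),
    }

params = retrieve_source_params("plosive-to-open-vowel", pitch_scale=1.5)
```

A waveform library would have to be re-recorded to change pitch or timbre; the parametrical representation needs only an arithmetic transformation before resynthesis.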
The stored parameters may be phase vocoder module coefficients (for example coefficients for a digital tracking phase vocoder (TPV) or "oscillator bank" vocoder), derived from the analysis of real speech data. Resynthesis of the raw sound signals by the phase vocoder is a type of additive re-synthesis that produces sound signals by converting Short Time Fourier Transform (STFT) data into amplitude and frequency trajectories (or envelopes) [see the book by E. R. Miranda quoted supra]. The output from the phase vocoder is supplied to the filter arrangement that simulates the vocal tract.
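The oscillator-bank style of resynthesis mentioned above can be sketched as below: each tracked partial drives one sinusoidal oscillator whose frequency and amplitude are interpolated between analysis frames, with a running phase accumulator. The frame data here is fabricated for illustration; a real TPV would obtain it from STFT analysis of recorded speech.

```python
import math

def oscillator_bank(freq_tracks, amp_tracks, frame_len, sample_rate):
    """Additive resynthesis in the style of a tracking phase vocoder:
    one oscillator per partial, with frequency and amplitude linearly
    interpolated between successive analysis frames."""
    num_frames = len(freq_tracks)
    out = [0.0] * ((num_frames - 1) * frame_len)
    for p in range(len(freq_tracks[0])):          # one oscillator per partial
        phase = 0.0
        for frame in range(num_frames - 1):
            f0, f1 = freq_tracks[frame][p], freq_tracks[frame + 1][p]
            a0, a1 = amp_tracks[frame][p], amp_tracks[frame + 1][p]
            for n in range(frame_len):
                t = n / frame_len
                freq = (1 - t) * f0 + t * f1      # interpolated frequency
                amp = (1 - t) * a0 + t * a1       # interpolated amplitude
                phase += 2 * math.pi * freq / sample_rate
                out[frame * frame_len + n] += amp * math.sin(phase)
    return out

# Two partials gliding upward in pitch over three analysis frames:
freqs = [[100.0, 200.0], [110.0, 220.0], [120.0, 240.0]]
amps = [[1.0, 0.5], [1.0, 0.5], [0.8, 0.4]]
signal = oscillator_bank(freqs, amps, 256, 16000)
```

Because every partial has its own frequency and amplitude trajectory, the resynthesised source is free of the static spacing and fixed envelope criticised in limitations b) to d) above.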
Implementation of the library as a parametrical library enables greater flexibility in the voice synthesis. More particularly, the source synthesis coefficients can be manipulated in order to simulate different glottal qualities. Moreover, phase vocoder-based spectral transformations can be made on the stored coefficients before resynthesis of the source sound, thereby making it possible to achieve richer prosody.
It is also advantageous to implement time-based transformations on the resynthesised source signal before it is fed to the filter arrangement. More particularly, the expressivity of the final speech signal can be enhanced by modifying the way in which the pitch of the source signal varies over time (and, thus, modifying the "intonation" of the final speech signal). The preferred technique for achieving this pitch transformation is the Pitch-Synchronous Overlap and Add (PSOLA) technique.
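The PSOLA principle can be sketched in a few lines: windowed grains two pitch periods wide are cut at the pitch marks of the source signal and overlap-added at a new spacing, changing the perceived pitch while leaving the spectral envelope largely intact. This is a minimal textbook sketch with uniformly spaced pitch marks, not the implementation of the invention.

```python
import math

def psola_shift(signal, period, pitch_factor):
    """Minimal PSOLA sketch: extract Hann-windowed grains (two periods
    wide) at every pitch mark, then overlap-add them at a new spacing of
    `period / pitch_factor` samples, raising (factor > 1) or lowering
    (factor < 1) the pitch."""
    grain_len = 2 * period
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (grain_len - 1))
              for n in range(grain_len)]
    new_period = int(round(period / pitch_factor))
    marks = range(0, len(signal) - grain_len, period)   # uniform pitch marks
    out = [0.0] * len(signal)
    for i, mark in enumerate(marks):
        start = i * new_period                # grain re-placed at new spacing
        for n in range(grain_len):
            if start + n < len(out):
                out[start + n] += window[n] * signal[mark + n]
    return out

tone = [math.sin(2 * math.pi * 100.0 * n / 16000) for n in range(1600)]
higher = psola_shift(tone, 160, 1.25)         # transpose pitch upward
```

Applying a time-varying `pitch_factor` along the utterance is what shapes the intonation contour, and hence the expressivity, of the final speech signal.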