This invention relates to speech synthesis systems. More particularly, this invention relates to the modeling of phoneme duration in speech synthesis.
Speech is used to communicate information from a speaker to a listener. Human speech production involves thought conveyance through a series of neurological processes and muscular movements to produce an acoustic sound pressure wave. To achieve speech, a speaker converts an idea into a linguistic structure by choosing appropriate words or phrases to represent the idea, orders the words or phrases based on grammatical rules of a language, and adds any additional local or global characteristics such as pitch intonation, duration, and stress to emphasize aspects important for overall meaning. Therefore, once a speaker has formed a thought to be communicated to a listener, they construct a phrase or sentence by choosing from a finite collection of mutually exclusive sounds, or phonemes. Following phrase or sentence construction, the human brain produces a sequence of motor commands that move the various muscles of the vocal system to produce the desired sound pressure wave.
Speech can be characterized in terms of acoustic-phonetics and articulatory phonetics. Acoustic-phonetics are described as the frequency structure and time waveform characteristics of speech. Acoustic-phonetics show the spectral characteristics of the speech wave to be time-varying, or nonstationary, since the physical system changes rapidly over time. Consequently, speech can be divided into sound segments that possess similar acoustic properties over short periods of time. A time waveform of a speech signal is used to determine signal periodicities, intensities, durations, and boundaries of individual speech sounds. This time waveform indicates that speech is not a string of discrete well-formed sounds, but rather a series of steady-state or target sounds with intermediate transitions. The preceding and succeeding sounds in a string can grossly affect whether a target is reached completely, how long it is held, and other finer details of the sound. As the string of sounds forming a particular utterance is continuous, there exists an interplay between the sounds of the utterance called coarticulation.
Coarticulation is the term used to refer to the change in phoneme articulation and acoustics caused by the influence of another sound in the same utterance.
Articulatory phonetics are described as the manner or place of articulation or the manner or place of adjustment and movement of speech organs involved in pronouncing an utterance. Changes found in the speech waveform are a direct consequence of movements of the speech system articulators, which rarely remain fixed for any sustained period of time. The speech system articulators are defined as the finer human anatomical components that move to different positions to produce various speech sounds. The speech system articulators comprise the vocal folds or vocal cords, the soft palate or velum, the tongue, the teeth, the lips, the uvula, and the mandible or jaw. These articulators determine the properties of the speech system because they are responsible for regions of emphasis, or resonances, and deemphasis, or antiresonances, for each sound in a speech signal spectrum. These resonances are a consequence of the articulators having formed various acoustical cavities and subcavities out of the vocal tract cavities. Therefore, each vocal tract shape is characterized by a set of resonant frequencies. Since these resonances tend to "form" the overall spectrum, they are referred to as formants.
One prior art approach to speech synthesis is the formant synthesis approach. The formant synthesis approach is based on a mathematical model of the human vocal tract in which a time domain-speech signal is Fourier transformed. The transformed signal is evaluated for each formant, and the speech synthesis system is programmed to recreate the formants associated with particular sounds. The problem with the formant synthesis approach is that the transition between individual sounds is difficult to recreate. This results in synthetic speech that sounds contrived and unnatural.
While speech production involves a complex sequence of articulatory movements timed so that vocal tract shapes occur in a desired phoneme sequence order, expressive uses of speech depend on tonal patterns of pitch, syllable stresses, and timing to form rhythmic speech patterns. Timing and rhythms of speech provide a significant contribution to the formal linguistic structure of speech communication. The tonal and rhythmic aspects of speech are referred to as the prosodic features. The acoustic patterns of prosodic features are heard in changes in duration, intensity, fundamental frequency, and spectral patterns of the individual phonemes.
A phoneme is the basic theoretical unit for describing how speech conveys linguistic meaning. As such, the phonemes of a language comprise a minimal theoretical set of units that are sufficient to convey all meaning in the language; this is to be compared with the actual sounds that are produced in speaking, which speech scientists call allophones. For American English, there are approximately 50 phonemes which are made up of vowels, semivowels, diphthongs, and consonants. Each phoneme can be considered to be a code that consists of a unique set of articulatory gestures. If speakers could exactly and consistently produce these phoneme sounds, speech would amount to a stream of discrete codes. However, because of many different factors including, for example, accents, gender, and coarticulatory effects, every phoneme has a variety of acoustic manifestations in the course of flowing speech. Thus, from an acoustical point of view, the phoneme actually represents a class of sounds that convey the same meaning.
The most abstract problem involved in speech synthesis is enabling the speech synthesis system with the appropriate language constraints. Whether phones, phonemes, syllables, or words are viewed as the basic unit of speech, language (or linguistic) constraints are generally concerned with how these fundamental units may be concatenated, in what order, in what context, and with what intended meaning. For example, if a speaker is asked to voice a phoneme in isolation, the phoneme will be clearly identifiable in the acoustic waveform. However, when spoken in context, phoneme boundaries become difficult to label because of the physical properties of the speech articulators. Since the vocal tract articulators consist of human tissue, their positioning from one phoneme to the next is executed by movement of muscles that control articulator movement. As such, the duration of a phoneme and the transition between phonemes can modify the manner in which a phoneme is produced. Therefore, associated with each phoneme is a collection of allophones, or variations on phones, that represent acoustic variations of the basic phoneme unit. Allophones represent the permissible freedom allowed within a particular language in producing a phoneme, and this flexibility is dependent on the phoneme as well as on the phoneme position within an utterance.
Another prior art approach to speech synthesis is the concatenation approach. The concatenation approach is more flexible than the formant synthesis approach because, in combining diphone sounds from different stored words to form new words, the concatenation approach better handles the transition between phoneme sounds. The concatenation approach is also advantageous because it eliminates the decision on which formant or which portion of the frequency band of a particular sound is to be used in the synthesis of the sound. The disadvantage of the concatenation approach is that discontinuities occur when the diphones from different words are combined to form new words. These discontinuities are the result of slight differences in frequency, magnitude, and phase between different diphones.
In using the concatenation approach for speech synthesis, four elements are frequently used to produce an acoustic sequence. These four elements comprise a library of diphones, a processing approach for combining the diphones of the library, information regarding the acoustic patterns of the prosodic feature of duration for the diphones, and information regarding the acoustic patterns of the prosodic feature of pitch for the diphones.
As previously discussed, in natural human speech the durations of phonetic segments are strongly dependent on contextual factors including, but not limited to, the identities of surrounding segments, within-word position, and presence of phrase boundaries. For synthetic speech to sound natural, these duration patterns must be closely reproduced by automatic text-to-speech systems. Two prior art approaches have been followed for duration prediction: general classification techniques, such as decision trees and neural networks; and sum-of-products methods based on multiple linear regression either in the linear or the log domain.
These two approaches to speech synthesis differ in the amount of linguistic knowledge required. These approaches also differ in the behavior of the model in situations not encountered during training. General classification techniques are almost always completely data-driven and, therefore, require a large amount of training data. Furthermore, they cope with never-encountered circumstances by using coarser representations thereby sacrificing resolution. In contrast, sum-of-products models embody a great deal of linguistic knowledge, which makes them more robust to the absence of data. In addition, the sum-of-products models predict durations for never-encountered contexts through interpolation, making use of the ordered structure uncovered during analysis of the data. Given the typical size of training corpora currently available, the sum-of-products approach tends to outperform the general classification approach, particularly when cross-corpus evaluation is considered. Thus, sum-of-products models are typically preferred.
When sum-of-products models are applied in the linear domain, they lead to various derivatives of the original additive model. When they are applied in the log domain, they lead to multiplicative models. The evidence appears to indicate that multiplicative duration models perform better than additive duration models because the distributions tend to be less skewed after the log transform. The multiplicative duration models also perform better because the fractional approach underlying multiplicative models is better suited for the small durations encountered with phonemes.
The origin of the sum-of-products approach, as applied to duration data, can be traced to the axiomatic measurement theorem. This theorem states that under certain conditions the duration function $D$ can be described by the generalized additive model given by

$$D(f_1, f_2, \ldots, f_N) = F\!\left[\sum_{i=1}^{N} \sum_{j=1}^{M_i} \alpha_{i,j}\, f_i(j)\right], \qquad (1)$$
where $f_i$ ($i = 1, \ldots, N$) represents the $i$th contextual factor influencing $D$, $M_i$ is the number of values that $f_i$ can take, $\alpha_{i,j}$ is the factor scale corresponding to the $j$th value of factor $f_i$, denoted by $f_i(j)$, and $F$ is an unknown monotonically increasing transformation. Thus, $F(x) = x$ corresponds to the additive case and $F(x) = \exp(x)$ corresponds to the multiplicative case.
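As a concrete illustration (not part of the patent text), equation 1 can be sketched in code. The factor scales below are invented for the example; the point is only that the additive case sums the selected scales directly, while the multiplicative case exponentiates a log-domain sum, so the effective scales multiply.

```python
import math

# Hypothetical factor scales alpha[i][j] for N = 2 contextual factors
# (e.g., accent and phrasal position); values invented for illustration.
alpha = [
    [60.0, 72.0],  # factor 1: unaccented vs accented
    [10.0, 18.0],  # factor 2: phrase-medial vs phrase-final
]

def predict(values, F):
    """Equation 1: D(f_1, ..., f_N) = F(sum over i of alpha[i][j_i]),
    where values[i] is the index j of the value taken by factor i."""
    x = sum(alpha[i][j] for i, j in enumerate(values))
    return F(x)

# Additive case, F(x) = x: the scales add directly (72 + 18).
d_add = predict([1, 1], lambda x: x)

# Multiplicative case, F(x) = exp(x): with scales fitted in the log
# domain, the effective factors multiply (72 * 18).
log_alpha = [[math.log(a) for a in row] for row in alpha]
x = sum(log_alpha[i][j] for i, j in enumerate([1, 1]))
d_mul = math.exp(x)

print(d_add, d_mul)  # 90.0 and 1296.0 (up to floating point)
```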
The conditions under which the duration function can be described by equation 1 have to do with factor independence. Specifically, a function $F$ can be constructed having a set of factor scales $\alpha_{i,j}$ such that equation 1 holds only if joint independence holds for all subsets of $2, 3, \ldots, N$ factors. Typically, this is not going to be the case for duration data because, for example, it is well known that the interaction between accent and phrasal position significantly influences vowel duration. Thus, accent and phrasal position are not independent factors.
In contrast, such dependent interactions tend to be well-behaved in that their effects are amplificatory rather than reversed or otherwise permuted. This has formed the basis of a regularity argument in favor of the application of equation 1 in spite of the dependent interactions. Although the assumption of joint independence is violated, the regular patterns of amplificatory interactions make it plausible that some sum-of-products model will fit appropriately transformed durations.
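To make the independence argument concrete, consider a small invented example: suppose accent lengthens a vowel by 20 ms phrase-medially but by 40 ms phrase-finally. No purely additive assignment of factor scales reproduces all four durations, yet the interaction is amplificatory (same direction, larger magnitude), which is the pattern the regularity argument relies on.

```python
# Invented durations (ms) indexed by (accented?, phrase-final?),
# illustrating an accent x phrasal-position interaction.
d = {(0, 0): 80.0, (1, 0): 100.0, (0, 1): 110.0, (1, 1): 150.0}

# If D were exactly additive, the accent effect would be the same in
# both phrasal positions; here it is not, so the factors interact.
medial_effect = d[(1, 0)] - d[(0, 0)]  # accent effect phrase-medially
final_effect = d[(1, 1)] - d[(0, 1)]   # accent effect phrase-finally
print(medial_effect, final_effect)     # 20.0 40.0: unequal, hence dependent

# The interaction is amplificatory: same sign, larger magnitude, so
# a suitable transformation F may still absorb it.
assert medial_effect > 0 and final_effect > medial_effect
```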
Therefore, the problem is that violating the joint independence assumption may substantially complicate the search for the transformation F. So far only strictly increasing functionals have been considered, such as F(x)=x and F(x)=exp(x). But the optimal transformation F may no longer be strictly increasing, opening up the possibility of inflection points, or even discontinuities. If this were the case, then the exponential transformation implied in the multiplicative model would not be the best choice. Consequently, there is a need for a functional transformation that, in the presence of amplificatory interactions, improves the duration modeling of phonemes in a synthetic speech generator.
A method and an apparatus for improved duration modeling of phonemes in a speech synthesis system are provided. According to one aspect of the invention, text is received into a processor of a speech synthesis system. The received text is processed using a sum-of-products phoneme duration model hosted on the speech synthesis system. The phoneme duration model, which is used along with a phoneme pitch model, is produced by developing a non-exponential functional transformation form for use with a generalized additive model. The non-exponential functional transformation form comprises a root sinusoidal transformation that is controlled in response to a minimum phoneme duration and a maximum phoneme duration. The minimum and maximum phoneme durations are observed in training data.
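The summary above names a root sinusoidal transformation controlled by the minimum and maximum phoneme durations observed in training data, but this section does not give its closed form. The sketch below is therefore a hypothetical bounded, sinusoid-based transformation invented for illustration only: the sigmoid squashing step, the exponent `gamma`, and the exact shape are assumptions, not the patent's formula. It shows only the qualitative property described, namely that predicted durations stay within the observed minimum and maximum.

```python
import math

def root_sinusoidal(x, d_min, d_max, gamma=2.0):
    """Hypothetical root sinusoidal transformation (illustrative only;
    the exact closed form is not given in this section).

    Maps an unbounded sum-of-products score x into [d_min, d_max],
    the minimum and maximum durations observed in training data,
    using the 1/gamma root of a quarter-period sinusoid.
    """
    # Squash x into [0, 1] so the sinusoid argument stays in [0, pi/2].
    u = 1.0 / (1.0 + math.exp(-x))
    return d_min + (d_max - d_min) * math.sin(math.pi / 2 * u) ** (1.0 / gamma)

# Durations remain bounded no matter how extreme the score:
print(root_sinusoidal(-10, 30.0, 250.0))  # near d_min = 30 ms
print(root_sinusoidal(+10, 30.0, 250.0))  # near d_max = 250 ms
```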
The received text is processed by specifying at least one of a number of contextual factors for the generalized additive model. The number of contextual factors may comprise an interaction between accent and the identity of a following phoneme, an interaction between accent and the identity of a preceding phoneme, an interaction between accent and a number of phonemes to the end of an utterance, a number of syllables to a nuclear accent of an utterance, a number of syllables to an end of an utterance, an interaction between syllable position and a position of a phoneme with respect to a left edge of the phoneme's enclosing word, an onset of an enclosing syllable, and a coda of an enclosing syllable. An inverse of the non-exponential functional transformation is applied to duration observations, or training data. Coefficients are generated for use with the generalized additive model. The generalized additive model comprising the coefficients is applied to at least one phoneme of the received text, resulting in the generation of at least one phoneme having a duration. An acoustic sequence is generated comprising speech signals that are representative of the received text. The phoneme duration model may be used with the formant method of speech generation and the concatenative method of speech generation.
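The training and prediction steps described above can be sketched as follows, under stated assumptions: a generic invertible transformation $F$ (the multiplicative case is used here purely as a stand-in), a toy one-hot encoding of two binary contextual factors, and invented duration data. The flow matches the text: apply the inverse transformation to the duration observations, generate coefficients by least squares, then apply the fitted model through $F$.

```python
import math

def solve(A, b):
    """Solve the linear system A a = b by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]

def F(x):      # stand-in transformation (multiplicative case shown)
    return math.exp(x)

def F_inv(d):  # its inverse, applied to the duration observations
    return math.log(d)

# Toy data, invented for illustration: an intercept column plus two
# binary contextual factors (accented?, phrase-final?), with observed
# phoneme durations in milliseconds.
X = [[1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 0, 0]]
durations = [95.0, 110.0, 140.0, 80.0]

# Step 1: map observations into the additive domain via the inverse.
y = [F_inv(d) for d in durations]

# Step 2: generate coefficients by least squares (normal equations).
n = len(X[0])
XtX = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
Xty = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
coef = solve(XtX, Xty)

# Step 3: apply the fitted model to a context through F.
predicted = F(sum(c * v for c, v in zip(coef, [1, 1, 1])))
print(round(predicted, 1))  # duration (ms) for accented, phrase-final: ~137.6
```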
These and other features, aspects, and advantages of the present invention will be apparent from the accompanying drawings and from the detailed description and appended claims which follow.