Microfiche Appendices
This application includes microfiche appendices of source code for a preferred embodiment, consisting of 2 sheets with a total of 128 pages.
This invention relates to speech recognition and more particularly to the types of such systems based on hidden Markov models.
Automatic speech recognition is a very difficult problem. For many applications, it is necessary to recognize speech in a speaker-independent manner (i.e., handle new speakers with no speaker-specific training of the system). This has proven to be difficult due to differences in pronunciation from speaker to speaker. Even speaker-dependent recognition has proven to be difficult due to variations in pronunciation in the speech of an individual.
Various hidden Markov model based speech recognition systems are known and need not be detailed herein. Such systems typically use realizations of phonemes which are statistical models of phonetic segments (including allophones or more generically phones) having parameters that are estimated from a set of training examples.
Models of words are made by concatenating the appropriate phone models. Recognition consists of finding the most-likely path through the set of word models for the input speech signal.
Known hidden Markov model speech recognition systems are based on a model of speech production as a Markov source. The speech units being modeled are represented by finite state machines. Probability distributions are associated with the transitions leaving each node, specifying the probability of taking each transition when visiting the node. A probability distribution over output symbols is associated with each node. The transition probability distributions implicitly model duration. The output symbol distributions are typically used to model speech characteristics such as spectra.
The probability distributions for transitions and output symbols are estimated using labeled examples of speech. Recognition consists of determining the path through the Markov chain that has the highest probability of generating the observed sequence. For continuous speech, this path will correspond to a sequence of word models. As background to this technology, reference is made to Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proc. IEEE, Vol. 77, No. 2, Feb. 1989, the content of which is incorporated herein by reference. Other published references are cited hereinafter, the content of which is incorporated herein by reference to the extent needed to satisfy disclosure requirements. All material incorporated herein by reference from non-U.S.-patent documents is incorporated as background material only and is not essential material to an understanding of the invention.
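The recognition step described above is conventionally performed with the Viterbi algorithm. The following is a minimal illustrative sketch, not taken from the microfiche appendices; the two-state model, its state names, and all probabilities are hypothetical, and a practical system would operate on word models built from phone models and would use log probabilities to avoid underflow:

```python
# Minimal Viterbi decoder for a discrete-output hidden Markov model.
# Finds the state path with the highest probability of generating the
# observed symbol sequence.

def viterbi(states, start_p, trans_p, emit_p, observations):
    """Return (best path probability, best state path)."""
    # best[s] holds the probability of the best path ending in state s.
    best = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    paths = {s: [s] for s in states}
    for obs in observations[1:]:
        new_best, new_paths = {}, {}
        for s in states:
            # Choose the predecessor maximizing the path probability.
            prev = max(states, key=lambda p: best[p] * trans_p[p][s])
            new_best[s] = best[prev] * trans_p[prev][s] * emit_p[s][obs]
            new_paths[s] = paths[prev] + [s]
        best, paths = new_best, new_paths
    final = max(states, key=lambda s: best[s])
    return best[final], paths[final]

# Hypothetical two-state phone model over discrete spectral symbols.
states = ["ph1", "ph2"]
start_p = {"ph1": 0.9, "ph2": 0.1}
trans_p = {"ph1": {"ph1": 0.6, "ph2": 0.4},
           "ph2": {"ph1": 0.1, "ph2": 0.9}}
emit_p = {"ph1": {"a": 0.8, "b": 0.2},
          "ph2": {"a": 0.3, "b": 0.7}}

prob, path = viterbi(states, start_p, trans_p, emit_p, ["a", "a", "b"])
# path is ["ph1", "ph1", "ph2"]: the decoder stays in ph1 while the
# observations favor it, then transitions to ph2 for the final symbol.
```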
Standard training and recognition algorithms for hidden Markov models are described in J. K. Baker, "Stochastic Modeling as a Means of Automatic Speech Recognition," PhD Thesis, Carnegie-Mellon University Computer Science Department, April 1975, or in Levinson et al., "An Introduction to the Application of the Theory of Probabilistic Functions of a Markov Process to Automatic Speech Recognition," Bell Sys. Tech. Journal, vol. 62(4), April 1983.
A number of systems have been developed which use allophone network representations of alternative pronunciations of words, such as Bahl, L. R. et al., "Further Results on the Recognition of a Continuously Read Natural Corpus," IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 872-875, April 1980, and Cohen et al., "The Phonological Component of an Automatic Speech Recognition System," in Speech Recognition, R. Reddy, ed., Academic Press, New York, pp. 275-320. These networks typically are generated by the application of phonological rules to a set of baseforms, i.e., standard or common pronunciations of words. A number of different phonological rule sets have been developed for use in speech recognition systems, and a number of different software systems have been developed to aid in the construction of phonological rule sets, such as Cohen et al., supra. These systems provide a means of specifying phonological rules and applying them to a dictionary of baseforms to create a representation of surface form pronunciations.
The major advantage of modeling alternative pronunciations with phone networks is that they explicitly represent linguistic knowledge about pronunciation. For hidden Markov model systems, this explicit representation of alternative pronunciations avoids the averaging together of different phenomena into the same model, which would otherwise result in a less precise model. Substitution, insertion, deletion, and changes in the order of phones can be explicitly modeled.
Phonological rules can be written to cover dialectal variation, fast speech reductions, etc. Generating these networks using rules and a baseform dictionary allows new vocabulary items to be added by composing or writing the appropriate baseform and allowing the alternative pronunciations to be derived automatically. Using rules and baseforms allows the expression of phonological knowledge in a form that linguists have used extensively in the past, and as such it seems to be a convenient notation for the expression of many general segmental phonological processes. Networks can be a compact representation of many alternative pronunciations.
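The derivation of alternative pronunciations from a baseform can be illustrated with a small sketch. The rule shown, an optional flapping of /t/ between vowels, the phone symbols, and the rule representation are hypothetical illustrations and do not reproduce the rule notation of any of the cited systems:

```python
# Illustrative application of one optional phonological rule to a
# baseform pronunciation. Because the rule is optional, both the
# baseform and the derived surface form are retained, yielding the set
# of alternative pronunciations a network would encode.

VOWELS = {"aa", "ae", "ax", "ih", "iy", "uw"}  # hypothetical inventory

def apply_flap_rule(pron):
    """Return all pronunciations derivable by optionally realizing
    /t/ as a flap [dx] between two vowels."""
    results = {tuple(pron)}
    for i in range(1, len(pron) - 1):
        if pron[i] == "t" and pron[i-1] in VOWELS and pron[i+1] in VOWELS:
            for r in list(results):
                flapped = list(r)
                if flapped[i] == "t":
                    flapped[i] = "dx"  # apply the rule at position i
                    results.add(tuple(flapped))
    return sorted(results)

# Hypothetical baseform for "butter": the rule context (vowel, t, vowel)
# is met, so two surface pronunciations are produced.
prons = apply_flap_rule(["b", "ax", "t", "ax"])
```

A full rule set would apply many such rules in sequence to every entry of a baseform dictionary, which is how the "bushy" networks discussed below arise when the rules overgenerate.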
In previous approaches using phone network representations of multiple pronunciations, rule sets have been developed without rigorous measures of what rules would be most important to the handling of the speech expected as input to a system and without a principled approach to refining the contextual specifications of rules in order to avoid overgeneration of pronunciations.
In previous approaches, the probabilities of the different pronunciations modeled have not been used because they are hard to estimate for a large vocabulary system, since most words in the vocabulary will not occur frequently enough in the training data to obtain accurate estimates. This can lead to false alarms during recognition, especially in continuous speech recognition systems.
There is substantial structure to phonological and phonetic variation within and across speakers, and there exists a large scientific literature dealing with linguistic structure at these and other levels. In the PhD dissertation of one of the co-inventors (M. H. Cohen, Phonological Structures for Speech Recognition, University of California, Berkeley, completed April 1989, publication first available February 1990), the literature is reviewed, the significant structure to phonological variation is demonstrated, and methods are taught for using the structure. With the exception of this dissertation, there has not been a teaching of how knowledge of this structure can be exploited to advantage in prior hidden Markov model (HMM) based speech recognition systems. It is a goal of the current invention to explicitly model certain aspects of phonetic and phonological structure in order to improve the performance of HMM systems and to extend the work undertaken in the course of the dissertation.
There are a number of significant limitations of current HMM systems which are in need of solution. Prior systems which model multiple pronunciations of words typically generate those models either by direct specification by an expert, or through the application of a set of phonological rules to baseform pronunciations of the words. The phonological rule sets have generally been developed without rigorous measures of what rules would be most important to handle the speech expected as input to a system. It is difficult to write rules which generate exactly those pronunciations that happen without ever generating pronunciations that do not happen. When writing many rules, in order to cover many of the pronunciations that do happen, the size of the representation can become prohibitively large, requiring much larger training data sets than are currently available in order to estimate the large number of parameters. In other words, the modeling techniques of past recognition systems have resulted in "bushy" networks, i.e., networks with many optional paths representing many possible pronunciations of each word, which causes problems in the statistically-based HMM systems.
Prior systems which model multiple pronunciations of words also typically do not use appropriate estimates of the probabilities of the various pronunciations modeled, since current training databases contain too few occurrences of all but the most common words to estimate probabilities reliably. This leads to false alarms, because pronunciations which are in fact unlikely are represented as being as likely as other, more common pronunciations.
The pronunciation of words is affected by the surrounding words. In particular, the realization of word-initial and word-final phones may vary substantially, depending on the adjacent phones in neighboring words. These cross-word effects are not explicitly modeled in current HMM systems, thereby degrading recognition performance.
Moreover, prior systems which model only a single pronunciation of each word typically do not choose the best single pronunciation to use based on a large set of training data. This often leads to a poor choice of pronunciation model, which is likely to lead to recognition errors.
It is therefore desirable to provide a method for use in a continuous speech recognition system which facilitates the recognition of phones in context with surrounding phones.