The invention relates to speech recognizers, language translators, spelling checkers and other devices which generate and score word-series hypotheses. More particularly, the invention relates to a language generator having a language model for scoring word-series hypotheses in speech recognizers, language translators, spelling checkers and other devices.
Certain automatic speech recognition devices, automatic language translation devices, and automatic spelling correction devices have been known to operate according to the model

Pr(W|Y) = Pr(W) Pr(Y|W) / Pr(Y).   [1]

In this model, W is a word-series hypothesis representing a series of one or more words, for example English-language words. The term Pr(W) is the probability of occurrence of the word-series hypothesis. The variable Y is an observed signal, and Pr(Y) is the probability of occurrence of the observed signal. Pr(W|Y) is the probability of occurrence of the word series W, given the occurrence of the observed signal Y. Pr(Y|W) is the probability of occurrence of the observed signal Y, given the occurrence of the word series W.
For automatic speech recognition, Y is an acoustic signal. (See, for example, L. R. Bahl, et al. "A Maximum Likelihood Approach to Continuous Speech Recognition." IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume PAMI-5, No. 2, March 1983, pages 179-190.) For automatic language translation, Y is a sequence of words in another language different from the language of the word-series hypothesis. (See, for example, P. F. Brown, et al. "A Statistical Approach To Machine Translation." Computational Linguistics, Vol. 16, No. 2, June 1990, pages 79-85.) For automatic spelling correction, Y is a sequence of characters produced by a possibly imperfect typist. (See, for example, E. Mays, et al. "Context Based Spelling Correction." Information Processing & Management, Vol. 27, No. 5, 1991, pages 517-522.)
In all three applications, given a signal Y, one seeks to determine the series of English words, W, which gave rise to the signal Y. In general, many different word series can give rise to the same signal Y. The model minimizes the probability of choosing an erroneous word series by selecting the word series W having the largest conditional probability given the observed signal Y.
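This selection rule can be sketched in code as follows. The sketch is a minimal illustration, not the device itself; the function names and the log-probability inputs are assumptions standing in for the models described below. Log probabilities are used so that products of small probabilities do not underflow:

```python
def best_word_series(hypotheses, log_pr_w, log_pr_y_given_w):
    """Return the word series W maximizing Pr(W) * Pr(Y|W).

    Because Pr(Y) is the same for every candidate word series,
    it can be omitted from the comparison without changing
    which hypothesis wins (see Equation 1).
    """
    return max(hypotheses,
               key=lambda w: log_pr_w(w) + log_pr_y_given_w(w))
```

Here `log_pr_w` plays the role of the language model and `log_pr_y_given_w` the role of the acoustic, translation, or mistyping model for the fixed observed signal Y.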
As shown in Equation 1, the conditional probability of the word series W given the observed signal Y is computed from three terms: (i) the probability of the word series W, multiplied by (ii) the probability that the observed signal Y will be produced when the word series W is intended, divided by (iii) the probability of observing the signal Y.
In the case of automatic speech recognition, the probability of the acoustic signal Y given the hypothesized word series W is estimated by using an acoustic model of the word series W. In automatic language translation, the probability of the sequence of words Y in another language given the hypothesized English-translation word series W is estimated by using a translation model for the word series W. In automatic spelling correction, the probability of the sequence of characters Y produced by a possibly imperfect typist given the hypothesized word series W is estimated by using a mistyping model for the word series W.
In all three applications, the probability of the word series W can be modeled according to the equation:

Pr(w_1^k) = Pr(w_1) Pr(w_2|w_1) . . . Pr(w_k|w_1^{k-1}),   [2]

where w_1^k represents a series of words w_1, w_2, . . . , w_k.
In the conditional probability Pr(w_k|w_1^{k-1}), the term w_1^{k-1} is called the history or the predictor feature. Each word in the history is a predictor word. The term w_k is called the predicted feature or the category feature.
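Equation 2 can be evaluated directly once the conditional probabilities are available. The sketch below is illustrative; the `cond_log_prob` callable is a hypothetical stand-in for a language model that returns log Pr(word | history):

```python
def series_log_probability(words, cond_log_prob):
    """Log of Pr(w_1^k) via the chain rule of Equation 2:

        log Pr(w_1) + log Pr(w_2|w_1) + ... + log Pr(w_k|w_1^{k-1})

    cond_log_prob(history, word) is assumed to return
    log Pr(word | history), where history is a tuple of words.
    """
    total = 0.0
    for i, word in enumerate(words):
        total += cond_log_prob(tuple(words[:i]), word)
    return total
```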
The mechanism for estimating the conditional probabilities in Equation 2 is called a language model. A language model estimates the conditional probabilities from limited training text. The larger the training text, and the larger the number of parameters in the language model, the more accurate and precise are the predictions from the language model.
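As an illustration of estimating such conditional probabilities from training text, the sketch below builds a toy bigram model by relative frequency. This is a hypothetical, unsmoothed maximum-likelihood estimate; a practical language model must smooth these counts to assign nonzero probability to events unseen in the limited training text:

```python
from collections import Counter

def train_bigram_model(training_sentences):
    """Estimate Pr(w_k | w_{k-1}) by relative frequency from
    training text (maximum-likelihood, no smoothing)."""
    bigram_counts = Counter()
    history_counts = Counter()
    for sentence in training_sentences:
        padded = ["<s>"] + sentence       # sentence-start marker
        for prev, cur in zip(padded, padded[1:]):
            bigram_counts[(prev, cur)] += 1
            history_counts[prev] += 1

    def cond_prob(prev, cur):
        # Histories never seen in training get probability 0.0,
        # which is exactly the deficiency smoothing addresses.
        if history_counts[prev] == 0:
            return 0.0
        return bigram_counts[(prev, cur)] / history_counts[prev]

    return cond_prob
```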
In all three applications, the probability Pr(Y) of occurrence of the observed signal Y can either be modeled as a function of some parameters, or can be assumed to be independent of the word series W to be found. In the latter case, the term Pr(Y) is dropped from Equation 1; since Pr(Y) is the same for every hypothesized word series, omitting it does not change which word series has the highest conditional probability.