This invention relates to speech recognition and more particularly to performing an N-best search with limited storage space.
Speech recognition involves searching and comparing the input speech to speech models representing vocabulary to identify words and sentences.
The search speed and search space for large vocabulary speech recognition have been an active research area for the past few years. Even on a state-of-the-art workstation, search can take hundreds of times real time for a large vocabulary task (20K words). Most fast search algorithms involve multiple passes of search: first, simple models (e.g. monophones) are used to do a quick rough search and output a much smaller N-best sub-space; then detailed models (e.g. clustered triphones with mixtures) are used to search that sub-space and output the final results (see Fil Alleva et al., "An Improved Search Algorithm Using Incremental Knowledge for Continuous Speech Recognition," ICASSP 1993, Vol. 2, 307-310; Long Nguyen et al., "Search Algorithms for Software-Only Real-Time Recognition with Very Large Vocabulary," ICASSP; and Hy Murveit et al., "Progressive-Search Algorithms for Large Vocabulary Speech Recognition," ICASSP). The first pass using monophones to reduce the search space will introduce error; therefore the reduced search space has to be large enough to contain the best path. This process requires many experiments and much fine-tuning.
The search process involves expanding a search tree according to the grammar and lexical constraints. The size of the search tree and the storage requirements grow exponentially with the size of the vocabulary. Viterbi beam search is used to prune away improbable branches of the tree; however, the tree is still very large for large vocabulary tasks.
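The beam pruning described above can be sketched as follows. This is a minimal illustration, not the implementation used in the invention; the hypothesis structure and the function name are assumptions for the example.

```python
def beam_prune(hypotheses, beam_width):
    """Keep only hypotheses whose log score is within beam_width of the
    best score for this frame; everything else is pruned from the tree."""
    best = max(h["score"] for h in hypotheses)
    return [h for h in hypotheses if h["score"] >= best - beam_width]

# Toy frame of active search-tree branches (log probabilities).
active = [
    {"path": "b ey", "score": -4.0},
    {"path": "b ih", "score": -9.5},   # falls outside the beam, pruned
    {"path": "p ey", "score": -5.2},
]
survivors = beam_prune(active, beam_width=3.0)
```

With a beam width of 3.0 and a best score of -4.0, any branch scoring below -7.0 is discarded; narrowing the beam trades accuracy for a smaller tree.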
A multi-pass algorithm is often used to speed up the search. Simple models (e.g. monophones) are used to do a quick rough search and output a much smaller N-best sub-space. Because there are very few models, the search can be done much faster. However, the accuracy of these simple models is not good enough; therefore a large enough N-best sub-space has to be preserved for the following stages of search with more detailed models.
Another approach is to use a lexical tree to maximize the sharing of evaluation. See Mosur Ravishankar, "Efficient Algorithms for Speech Recognition," Ph.D. thesis, CMU-CS-96-143, 1996. Also see Julian Odell, "The Use of Context in Large Vocabulary Speech Recognition," Ph.D. thesis, Queen's College, Cambridge University, 1995. For example, suppose both bake and baked are allowed at a certain grammar node; much of their evaluation can be shared because both words start with the phone sequence /b/ /ey/ /k/. If monophones are used in the first pass of search, no matter how large the vocabulary is, there are only about 50 English phones the search can start with. This principle is called a lexical tree because the sharing of initial evaluations, with fan-out only where the phones differ, looks like a tree structure. The effect of a lexical tree can be achieved by removing the word level of the grammar and then canonicalizing (removing redundancy from) the phone network. For example:
% more simple.cfg
start(<S>).
<S> ---> bake | baked.
bake ---> b ey k.
baked ---> b ey k t.
% cfg_merge simple.cfg | rg_from_rgdag | rg_canonicalize
start(<S>).
<S> ---> b, Z_1.
Z_1 ---> ey, Z_2.
Z_2 ---> k, Z_3.
Z_3 ---> t, Z_4.
Z_3 ---> "".
Z_4 ---> "".
The original grammar has two levels: a sentence grammar in terms of words, and a pronunciation grammar (lexicon) in terms of phones. After removing the word level and then canonicalizing the one-level phone network, the same initial phones will automatically be shared. The recognizer will output a phone sequence as the recognition result, which can be parsed (text only) to recover the words. Text parsing takes virtually no time compared to speech recognition parsing.
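The prefix sharing achieved by canonicalizing the phone network is the same sharing a trie provides. The following sketch builds such a structure for the bake/baked example; the nested-dict representation and the helper name are illustrative assumptions, not the cfg_merge/rg_canonicalize tools themselves.

```python
def build_lexical_tree(lexicon):
    """Merge the shared phone prefixes of all words into one trie.
    Nodes are nested dicts; the "" key marks where a word can end."""
    root = {}
    for word, phones in lexicon.items():
        node = root
        for ph in phones:
            node = node.setdefault(ph, {})
        node.setdefault("", []).append(word)
    return root

lexicon = {"bake": ["b", "ey", "k"], "baked": ["b", "ey", "k", "t"]}
tree = build_lexical_tree(lexicon)
# /b/, /ey/, /k/ are evaluated once for both words; the tree only
# fans out after /k/, where "bake" may end and "baked" continues with /t/.
```

This mirrors the canonicalized grammar above: the single b/ey/k chain corresponds to Z_1 through Z_3, and the branch at Z_3 corresponds to the word-end versus /t/ fan-out.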
It is desirable to provide a method to speed up the search and reduce the resulting search space that does not introduce error and can be used independently of multi-pass search or lexical tree.
In accordance with one embodiment of the present invention, an N-best search process with little increase in memory space and processing is provided by applying Viterbi pruning to word-level states to keep only the best path, while also keeping sub-optimal paths for sentence-level states.
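The asymmetry in the embodiment above can be sketched as a single path-merging rule: word-level states collapse to the one best incoming path (standard Viterbi), while sentence-level states retain up to N ranked paths. This is a hedged illustration of the idea only; the function name, the (score, history) tuples, and the choice of N are assumptions, not the patented implementation.

```python
import heapq

def merge_paths(incoming, n_best, word_level):
    """Merge paths arriving at a state.
    Word-level states: keep only the single best path (Viterbi pruning).
    Sentence-level states: keep up to n_best ranked sub-optimal paths."""
    if word_level:
        return [max(incoming, key=lambda p: p[0])]
    return heapq.nlargest(n_best, incoming, key=lambda p: p[0])

# Toy incoming paths as (log score, word history) tuples.
paths = [(-2.0, "bake"), (-2.5, "baked"), (-7.0, "bait")]
word_survivors = merge_paths(paths, n_best=2, word_level=True)
sentence_survivors = merge_paths(paths, n_best=2, word_level=False)
```

Because sub-optimal paths are kept only at the comparatively few sentence-level states, the storage grows far more slowly than if every word-level state kept N paths.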