There are many applications in which a group of elements must provide optimal coverage of their constituent units. One such application is speech recognition optimization. A speech recognition system uses a specific set of commands, and recognition accuracy for those commands can be improved through supervised adaptation. During supervised adaptation, a user speaks a list of commands, which the speech engine then uses to update its model. For supervised adaptation, there is a need to provide an optimally ordered list that minimizes the number of training commands required to fully cover the phonetic content of the overall system, and that ensures, at any given time during the supervised adaptation, that the largest possible phonetic coverage has already been trained.
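The ordering property described above can be approximated with a greedy maximum-coverage pass: at each step, pick the command that contributes the most not-yet-covered phonetic units. The sketch below assumes each command is represented as a set of phonetic units (e.g. phonemes); the function name and representation are illustrative, not taken from the source.

```python
def order_commands(commands):
    """Greedily order commands so that each prefix of the returned list
    covers as many phonetic units as possible.

    `commands` maps a command string to the set of phonetic units it
    contains (an assumed representation). Returns the ordered list and
    the set of units it covers; commands adding no new coverage are
    dropped, since they would not shorten training.
    """
    remaining = dict(commands)
    covered = set()
    ordered = []
    while remaining:
        # Pick the command contributing the most not-yet-covered units.
        best = max(remaining, key=lambda c: len(remaining[c] - covered))
        if not remaining[best] - covered:
            break  # no remaining command adds new phonetic coverage
        ordered.append(best)
        covered |= remaining.pop(best)
    return ordered, covered
```

A user who stops partway through the ordered list has still trained the largest phonetic coverage achievable with that many commands under the greedy heuristic, which is the stated goal for supervised adaptation.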
In the case of unsupervised adaptation, where the speech engine continuously adapts to a user's voice, there is a need to limit the unsupervised adaptation to an optimized subset of commands. This minimizes the risk of over-training, by stopping unsupervised adaptation once the optimum set of commands has been covered, and it accelerates adaptation by relying on that subset of commands only.
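The unsupervised case amounts to a gating rule: adapt only on commands in the optimized subset, and stop adapting once every command in that subset has been observed. The class below is a minimal sketch of such a gate; the class and method names are illustrative and not from the source.

```python
class UnsupervisedAdapter:
    """Gate unsupervised adaptation to an optimized command subset and
    stop once every command in that subset has been observed
    (a sketch; the subset would come from the coverage-ordering step).
    """

    def __init__(self, optimal_subset):
        self.subset = set(optimal_subset)   # commands allowed to adapt
        self.pending = set(optimal_subset)  # commands not yet observed

    def should_adapt(self, command):
        # Adapt only on subset commands, and only while coverage
        # of the subset is still incomplete (guards over-training).
        if command in self.subset and self.pending:
            self.pending.discard(command)
            return True
        return False

    @property
    def done(self):
        return not self.pending
```

The engine would call `should_adapt` on each recognized utterance and feed the utterance to adaptation only when it returns `True`, so adaptation both focuses on the optimized subset and halts once that subset has been met.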