Early speech recognition systems for computer users provided basic dictation capabilities, converting spoken words into written text. These systems were often implemented as user applications running on top of the computer's operating system, in cooperation with other user applications such as word processing applications.
Later speech recognition systems sometimes included command and control functionality, in addition to dictation, by providing static, predefined operations. These operations enabled limited control of the user interface, such as starting applications and switching between applications.
With these legacy speech recognition systems, creating a new voice command required knowledge of the speech recognition application programming interface (API) and extensive software development, such as C++ programming. Each new operation required a custom-developed software application interfaced with the speech recognition API. Because of the substantial development effort needed to create, update, and maintain new operations with these systems, providing personalized operations tailored to the needs of the individual user was impractical.