The development of voice applications can be extremely complex, and the complexity is typically exacerbated by the lack of a dedicated, standard, or well-known development architecture. Although more recent voice application development has adopted portions of the Web programming model, significant differences remain between voice and Web applications.
To create robust voice applications, voice application developers generally have needed to be familiar with many programming languages, techniques, architectures, and processes. Compounding this problem, voice applications are often built using proprietary markup languages. The emergence of the VoiceXML standard has eliminated some of this complexity, allowing voice developers to better focus their skills. VoiceXML allows Web programming models to be adopted for voice applications and implemented through a server-side framework, similar to the implementation of Web applications.
However, speech recognition still tends to be more error-prone than collecting data in a Web application, since background noise and other factors can interfere with recognition. Unlike Web applications, voice applications require dialogs between the computer and the user, for example to confirm an input or to re-prompt the user when there has been no input. Voice applications rely on grammars to define which words or phrases are to be recognized.
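By way of illustration, such a dialog might be expressed in VoiceXML roughly as follows. This is a hedged sketch, not a fragment of any particular application; the form name, field name, and the grammar file `cities.grxml` are hypothetical.

```xml
<!-- Illustrative VoiceXML fragment: collect one unit of information,
     re-prompt on no input, and echo the value back for confirmation. -->
<form id="getCity">
  <field name="city">
    <!-- The grammar defines which words or phrases can be recognized. -->
    <grammar src="cities.grxml" type="application/srgs+xml"/>
    <prompt>Which city would you like?</prompt>
    <noinput>
      <!-- Played when the user says nothing; the prompt is then replayed. -->
      <prompt>Sorry, I did not hear you.</prompt>
      <reprompt/>
    </noinput>
    <filled>
      <prompt>You said <value expr="city"/>.</prompt>
    </filled>
  </field>
</form>
```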
Reusable Dialog Components (RDCs), such as those that can be implemented as JSP 2.0 tags, are known. RDCs can assist in developing voice applications in much the same manner as reusable components assist in developing Web applications. RDCs encapsulate the voice-specific elements, such as the dialog, grammars, and call flow, needed to obtain units of information. A developer using an RDC does not need to know the underlying grammar, but only the attributes that the RDC tag requires. Using RDCs to handle the interactions for common dialogs can free a developer to deal with more complicated areas of the voice application.
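For instance, a JSP page might invoke an RDC tag in the style of the Apache Jakarta RDC taglib; the taglib URI, tag name, and attribute names below follow that project's conventions but should be treated as assumptions, since exact names vary between RDC implementations.

```jsp
<%-- Illustrative use of an RDC as a JSP 2.0 tag. The developer supplies
     only attributes; the tag encapsulates the prompt, grammar,
     confirmation, and re-prompt logic needed to collect a date. --%>
<%@ taglib prefix="rdc" uri="http://jakarta.apache.org/taglibs/rdc-1.0" %>

<rdc:date id="travelDate"
          minDate="01012006"
          maxDate="12312006"
          confirm="true"/>
```

Note that nothing in this usage exposes the grammar itself: the page author sees only the `id`, the date range, and whether confirmation is wanted.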
However, the contemporary RDC framework has a limited ability to accept anything other than static grammars. To build voice applications that include dynamic grammars, a developer must implement a solution specific to his or her environment and the data source being used. The custom code this requires defeats the purpose of having reusable components. This is especially cumbersome when the options and data presented to a caller need to be gathered dynamically from a backend source.
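The kind of environment-specific custom code described above might look like the following JSP fragment, which builds an inline SRGS grammar from a backend result. The `accountList` collection and its `name` property are hypothetical placeholders for whatever the backend actually returns; the point is that this glue code is tied to one data source and cannot be reused.

```jsp
<%-- Hypothetical custom code: generate an SRGS grammar on the fly from a
     backend query result, because the RDC cannot accept it directly. --%>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>

<grammar version="1.0" root="accounts" type="application/srgs+xml"
         xmlns="http://www.w3.org/2001/06/grammar">
  <rule id="accounts">
    <one-of>
      <%-- One <item> per record fetched from the backend source. --%>
      <c:forEach var="acct" items="${accountList}">
        <item>${acct.name}</item>
      </c:forEach>
    </one-of>
  </rule>
</grammar>
```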
A need therefore exists for a technique, implemented in voice recognition systems, whereby the above-mentioned disadvantages can be mitigated or alleviated. A further need exists for a system or process that provides dynamic grammars for reusable dialog components.