1. Field of the Invention
The field of the invention is data processing, or, more specifically, methods, apparatus, and products for record disambiguation in a multimodal application operating on a multimodal device.
2. Description of Related Art
User interaction with applications running on small devices through a keyboard or stylus has become increasingly limited and cumbersome as those devices have become smaller. In particular, small handheld devices like mobile phones and PDAs serve many functions and contain sufficient processing power to support user interaction through multimodal access, that is, by interaction in non-voice modes as well as voice mode. Devices that support multimodal access combine multiple user input modes or channels in the same interaction, allowing a user to interact with the applications on the device simultaneously through multiple input modes or channels. The methods of input include speech recognition, keyboard, touch screen, stylus, mouse, handwriting, and others. Multimodal input often makes using a small device easier.
Multimodal applications are often formed by sets of markup documents served up by web servers for display on multimodal browsers. A ‘multimodal browser,’ as the term is used in this specification, generally means a web browser capable of receiving multimodal input and interacting with users with multimodal output, where modes of the multimodal input and output include at least a speech mode. Multimodal browsers typically render web pages written in XHTML+Voice (‘X+V’). X+V provides a markup language that enables users to interact with a multimodal application, often running on a server, through spoken dialog in addition to traditional means of input such as keyboard strokes and mouse pointer action. Visual markup tells a multimodal browser what the user interface is to look like and how it is to behave when the user types, points, or clicks. Similarly, voice markup tells a multimodal browser what to do when the user speaks to it. For visual markup, the multimodal browser uses a graphics engine; for voice markup, the multimodal browser uses a speech engine. X+V adds spoken interaction to standard web content by integrating XHTML (eXtensible Hypertext Markup Language) and speech recognition vocabularies supported by VoiceXML. For visual markup, X+V includes the XHTML standard. For voice markup, X+V includes a subset of VoiceXML. For synchronizing the VoiceXML elements with corresponding visual interface elements, X+V uses events. XHTML includes voice modules that support speech synthesis, speech dialogs, command and control, and speech grammars. Voice handlers can be attached to XHTML elements and respond to specific events. Voice interaction features are integrated with XHTML and can consequently be used directly within XHTML content.
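For further explanation, consider the following simplified sketch of an X+V page. The fragment is a hypothetical example only: identifiers such as the form named ‘voice_city’ and the grammar file ‘cities.grxml’ are merely illustrative, and the markup only approximates the X+V conventions for combining XHTML, VoiceXML, and XML Events:

    <html xmlns="http://www.w3.org/1999/xhtml"
          xmlns:vxml="http://www.w3.org/2001/vxml"
          xmlns:ev="http://www.w3.org/2001/xml-events"
          xmlns:xv="http://www.voicexml.org/2002/xhtml+voice">
      <head>
        <!-- voice markup: a VoiceXML dialog that prompts for a city name -->
        <vxml:form id="voice_city">
          <vxml:field name="vcity">
            <vxml:prompt>Which city?</vxml:prompt>
            <vxml:grammar src="cities.grxml" type="application/srgs+xml"/>
          </vxml:field>
        </vxml:form>
        <!-- synchronization: mirror the voice field into the visual input -->
        <xv:sync xv:input="city" xv:field="#vcity"/>
      </head>
      <body>
        <!-- visual markup: focusing the input activates the voice dialog -->
        <input type="text" name="city" id="city"
               ev:event="focus" ev:handler="#voice_city"/>
      </body>
    </html>

In such a page, whether the user types a city name into the input field or speaks one, the event wiring and the sync element keep the visual and voice representations of the field consistent.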
In addition to X+V, multimodal applications also may be implemented with Speech Application Language Tags (‘SALT’). SALT is a markup language developed by the SALT Forum. Both X+V and SALT are markup languages for creating applications that use voice input/speech recognition and voice output/speech synthesis. Both SALT applications and X+V applications use underlying speech recognition and synthesis technologies or ‘speech engines’ to do the work of recognizing and generating human speech. As markup languages, both X+V and SALT provide markup-based programming environments for using speech engines in an application's user interface. Both languages have language elements, markup tags, that specify what the speech-recognition engine should listen for and what the synthesis engine should ‘say.’ Whereas X+V combines XHTML, VoiceXML, and the XML Events standard to create multimodal applications, SALT does not provide a standard visual markup language or eventing model. Rather, it is a low-level set of tags for specifying voice interaction that can be embedded into other environments. In addition to X+V and SALT, multimodal applications may be implemented in Java with a Java speech framework, in C++, and with other technologies in other environments as well.
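For purposes of illustration only, a comparable SALT fragment embedded in an ordinary HTML page might take the following shape; identifiers such as ‘txtCity’ and ‘cities.grxml’ are hypothetical. Because SALT supplies no eventing model of its own, the host page in this sketch starts the prompt and listen objects from script:

    <html xmlns:salt="http://www.saltforum.org/2002/SALT">
      <body>
        <input type="text" id="txtCity"/>
        <input type="button" value="Speak"
               onclick="askCity.Start(); recoCity.Start();"/>
        <!-- speech output: a prompt played to the user -->
        <salt:prompt id="askCity">Which city?</salt:prompt>
        <!-- speech input: recognize against a grammar, then copy the
             recognition result into the visual input element -->
        <salt:listen id="recoCity">
          <salt:grammar src="cities.grxml"/>
          <salt:bind targetelement="txtCity" value="//city"/>
        </salt:listen>
      </body>
    </html>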
Current multimodal applications support a voice mode of user interaction using a speech engine. A speech engine provides recognition and generation or ‘synthesis’ of human speech through use of an acoustic model that associates speech waveform data representing recorded pronunciations of speech with textual representations of those pronunciations, also referred to as ‘phonemes.’ Because most languages include sets of words that have the same pronunciation but have different spellings to distinguish each word's semantics, a set of phonemes representing a pronunciation may refer to more than one word in the language. A set of words having the same pronunciation, regardless of the words' semantics, is referred to as a ‘homophonic set.’ When a voice utterance specifying a word in a homophonic set is provided to the speech engine for recognition, therefore, the speech engine may return any one or all of the words in the homophonic set, but not necessarily the word intended by the speaker of the voice utterance.
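For further explanation, consider the following Speech Recognition Grammar Specification (‘SRGS’) fragment, offered as a hypothetical example only, in which three differently spelled English words share a single pronunciation:

    <grammar xmlns="http://www.w3.org/2001/06/grammar"
             version="1.0" root="surname" xml:lang="en-US">
      <rule id="surname">
        <!-- Wright, Right, and Write form a homophonic set: each is
             pronounced the same, so an utterance matching this rule
             gives the speech engine no acoustic basis for choosing
             among the three items -->
        <one-of>
          <item>Wright</item>
          <item>Right</item>
          <item>Write</item>
        </one-of>
      </rule>
    </grammar>

A speech engine recognizing an utterance against such a grammar may return any of the three items, regardless of which word the speaker intended.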
In many multimodal applications, the application retrieves information for dynamic rendering from a database or other data repository. For example, the multimodal application may retrieve contact information for a user from the user's contact database. In such a database or other data repository, the values of a particular attribute for multiple records may belong to the same homophonic set. When such an attribute is used as the key to select information from the data repository, the multimodal application cannot disambiguate among the records to select the record desired by the user.
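For further explanation, consider the following hypothetical contact repository, shown as XML for convenience, in which the ‘lastname’ values of two records belong to the same homophonic set:

    <contacts>
      <!-- two records whose lastname values are homophones -->
      <contact id="1">
        <firstname>Chris</firstname>
        <lastname>Wright</lastname>
        <phone>555-0101</phone>
      </contact>
      <contact id="2">
        <firstname>Chris</firstname>
        <lastname>Write</lastname>
        <phone>555-0102</phone>
      </contact>
    </contacts>

When the user speaks the name of either contact, the speech engine may return either spelling, so the recognized text cannot reliably key to the record the user intended.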