I. Field of the Invention
The present invention pertains generally to the field of communications, and more specifically to user interfaces for speech-enabled devices.
II. Background
Voice recognition (VR) represents one of the most important techniques to endow a machine with simulated intelligence to recognize user-voiced commands and to facilitate human interface with the machine. VR also represents a key technique for human speech understanding. Systems that employ techniques to recover a linguistic message from an acoustic speech signal are called voice recognizers. The term "voice recognizer" is used herein to mean generally any spoken-user-interface-enabled device. A voice recognizer typically comprises an acoustic processor, which extracts a sequence of information-bearing features, or vectors, necessary to achieve VR of the incoming raw speech, and a word decoder, which decodes the sequence of features, or vectors, to yield a meaningful and desired output format such as a sequence of linguistic words corresponding to the input utterance. To increase the performance of a given system, training is required to equip the system with valid parameters; in other words, the system must learn before it can function optimally.
The acoustic processor represents a front-end speech analysis subsystem in a voice recognizer. In response to an input speech signal, the acoustic processor provides an appropriate representation to characterize the time-varying speech signal. The acoustic processor should discard irrelevant information such as background noise, channel distortion, speaker characteristics, and manner of speaking. Efficient acoustic processing furnishes voice recognizers with enhanced acoustic discrimination power. To this end, a useful characteristic to be analyzed is the short-time spectral envelope. Two commonly used spectral analysis techniques for characterizing the short-time spectral envelope are linear predictive coding (LPC) and filter-bank-based spectral modeling. Exemplary LPC techniques are described in U.S. Pat. No. 5,414,796, which is assigned to the assignee of the present invention and fully incorporated herein by reference, and L. B. Rabiner and R. W. Schafer, Digital Processing of Speech Signals 396-453 (1978), which is also fully incorporated herein by reference.
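The LPC analysis referenced above can be illustrated with the classical autocorrelation method. The following sketch is not drawn from the cited patent or text; it simply implements the standard Levinson-Durbin recursion on a single analysis frame:

```python
def lpc(frame, order):
    """Autocorrelation-method LPC via the Levinson-Durbin recursion.

    Returns the inverse-filter coefficients a[0..order] (with a[0] == 1)
    and the final prediction-error energy.
    """
    n = len(frame)
    # Short-time autocorrelation for lags 0..order.
    r = [sum(frame[j] * frame[j + k] for j in range(n - k))
         for k in range(order + 1)]
    a, err = [1.0], r[0]
    for i in range(1, order + 1):
        acc = sum(a[j] * r[i - j] for j in range(i))
        k = -acc / err                                  # reflection coefficient
        a = [1.0] + [a[j] + k * a[i - j] for j in range(1, i)] + [k]
        err *= 1.0 - k * k                              # reduced residual energy
    return a, err

# A decaying exponential behaves like a first-order all-pole source with
# pole 0.9, so an order-1 model should recover a[1] close to -0.9.
frame = [0.9 ** k for k in range(200)]
coeffs, residual = lpc(frame, 1)
print(coeffs)
```

In a real acoustic processor this analysis would run per windowed frame, with the coefficients (or features derived from them, such as cepstra) forming the vectors passed to the word decoder.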
The use of VR (also commonly referred to as speech recognition) is becoming increasingly important for safety reasons. For example, VR may be used to replace the manual task of pushing buttons on a wireless telephone keypad. This is especially important when a user is initiating a telephone call while driving a car. When using a phone without VR, the driver must remove one hand from the steering wheel and look at the phone keypad while pushing the buttons to dial the call. These acts increase the likelihood of a car accident. A speech-enabled phone (i.e., a phone designed for speech recognition) would allow the driver to place telephone calls while continuously watching the road, and a hands-free car-kit system would additionally permit the driver to keep both hands on the steering wheel during call initiation.
Speech recognition devices are classified as either speaker-dependent or speaker-independent devices. Speaker-independent devices are capable of accepting voice commands from any user. Speaker-dependent devices, which are more common, are trained to recognize commands from particular users. A speaker-dependent VR device typically operates in two phases, a training phase and a recognition phase. In the training phase, the VR system prompts the user to speak each of the words in the system's vocabulary once or twice so the system can learn the characteristics of the user's speech for these particular words or phrases. An exemplary vocabulary for a hands-free car kit might include the digits on the keypad; the keywords "call," "send," "dial," "cancel," "clear," "add," "delete," "history," "program," "yes," and "no"; and the names of a predefined number of commonly called coworkers, friends, or family members. Once training is complete, the user can initiate calls in the recognition phase by speaking the trained keywords. For example, if the name "John" were one of the trained names, the user could initiate a call to John by saying the phrase "Call John." The VR system would recognize the words "Call" and "John," and would dial the number that the user had previously entered as John's telephone number.
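The two-phase operation described above can be sketched as a simple template-matching recognizer. The text does not name a matching algorithm; dynamic time warping (DTW) is assumed here as a classical choice for speaker-dependent isolated-word recognition, and the feature vectors are purely illustrative:

```python
import math

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two feature-vector sequences."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],       # stretch template
                                 d[i][j - 1],       # stretch utterance
                                 d[i - 1][j - 1])   # frame-for-frame match
    return d[n][m]

# Training phase: store one feature template per vocabulary word
# (real systems would extract these from the user's prompted speech).
templates = {
    "call": [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)],
    "send": [(5.0, 5.0), (4.0, 4.0), (3.0, 3.0)],
}

def recognize(utterance):
    """Recognition phase: pick the word whose template lies closest
    to the captured utterance under DTW."""
    return min(templates, key=lambda w: dtw_distance(templates[w], utterance))
```

DTW tolerates the timing variation between a training utterance and a later recognition utterance of the same word, which is what makes a small per-user template set practical on limited hardware.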
Conventional VR devices rely upon spoken user interfaces, as opposed to graphical user interfaces such as keyboards and monitors, to allow the user to interact with the VR device. The user interacts with the VR device by, e.g., making a telephone call, receiving a telephone call, or accessing features such as voice memo, voice mail, and email using spoken commands. The user's input is captured using known VR techniques, and feedback to the user is provided via text-to-speech (TTS) or recorded prompts.
The VR device uses isolated word recognition when the user speaks isolated words, such as a name to be called that is stored in the memory of the VR device, or a command to be performed, such as a command to organize the phonebook, record and play voice memos, or send an email with the user's speech as a voice attachment. Conventional VR technology is quite mature for isolated word recognition of vocabularies up to approximately forty or fifty words. Hence, the processor and memory resources on a cellular telephone can be used to build an extremely accurate mechanism for spoken user input.
However, for the user to speak a telephone number and have the VR device call the number, the VR device would require continuous speech recognition (CSR) capability because people typically do not pause between the individual digits as they recite a telephone number. The VR device must compare the captured utterance (the spoken telephone number) with 10^N combinations of stored patterns (a ten-digit, speaker-independent vocabulary), where N is the number of digits in the telephone number. CSR technology is also required for the user to enter email addresses into the VR device using speech input, which demands even more processing and memory capability, as 26^N combinations must be compared with the captured utterance. CSR technology typically requires more processor and memory resources than isolated word recognition technology, thereby adding manufacturing cost to the VR device (e.g., a cellular telephone). Moreover, CSR technology does not provide a satisfactorily accurate mechanism for speech input, particularly in the noisy environments in which cellular telephones are typically used.
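The growth of the CSR matching space described above is easy to quantify (a simple illustration, not part of the original text):

```python
def search_space(alphabet_size, length):
    """Number of candidate strings a continuous recognizer must consider
    for an utterance of `length` symbols drawn from an alphabet of
    `alphabet_size` symbols."""
    return alphabet_size ** length

# A ten-digit telephone number over the digits 0-9:
digits = search_space(10, 10)    # 10**10 = 10,000,000,000 candidates
# A ten-character email fragment over the letters a-z:
letters = search_space(26, 10)   # 26**10, on the order of 1.4e14 candidates
print(digits, letters)
```

The exponential growth in N is why digit and letter entry strain the processor and memory budgets of a handset far more than a forty-word isolated-word vocabulary does.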
Hence, although most conventional VR products with spoken user interfaces for digit entry use speaker-independent CSR technology, when processor, memory, and/or battery power constraints prohibit the use of CSR technology, the digit entry feature of the spoken user interface is typically replaced with traditional keypad entry. Cellular telephone manufacturers, for example, typically use this approach, so that the user is prompted to enter a telephone number using the keypad. However, most users will not take the time and effort to enter a personal phonebook full of telephone numbers by hand and record an individual voice tag for each number. Thus, there is a need for a mechanism that uses existing information to establish a user phonebook with voice tags in a VR device.
The present invention is directed to a mechanism that uses existing information to establish a user phonebook with voice tags in a VR device. Accordingly, in one aspect of the invention, a speech-enabled device advantageously includes at least one mechanism configured to enable a user to exchange information bidirectionally with the speech-enabled device; and logic coupled to the at least one mechanism and configured to prompt the user through the at least one mechanism, in response to occurrence of a user-defined event, to speak a voice tag to be associated with an entry in a call history of the speech-enabled device.
In another aspect of the invention, a speech-enabled device advantageously includes means for enabling a user to exchange information bidirectionally with the speech-enabled device; and means for prompting the user, in response to occurrence of a user-defined event, to speak a voice tag to be associated with an entry in a call history of the speech-enabled device.
In another aspect of the invention, a method of prompting a user to enter a voice tag into a telephone advantageously includes the steps of receiving a user-defined number of messages on the telephone from a particular source; and prompting the user to enter a voice tag associated with the particular source into the telephone after the receiving step has occurred.
In another aspect of the invention, a method of prompting a user to enter a voice tag into a telephone advantageously includes the steps of sending a user-defined number of messages on the telephone to a particular destination; and prompting the user to enter a voice tag associated with the particular destination into the telephone after the sending step has occurred.
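The receiving-side and sending-side aspects above share one behavior: count the messages exchanged with each party and prompt for a voice tag once a user-defined threshold is reached. A minimal sketch of that logic follows; the class and method names are hypothetical, not taken from the claims:

```python
from collections import Counter

class VoiceTagPrompter:
    """Tracks the call history and decides when to prompt the user to
    speak a voice tag for a particular source or destination."""

    def __init__(self, threshold):
        self.threshold = threshold   # the user-defined number of messages
        self.counts = Counter()      # messages exchanged with each party
        self.tagged = set()          # parties that already have a voice tag

    def record_message(self, party):
        """Log one message received from (or sent to) `party`; return
        True when the device should prompt the user for a voice tag."""
        self.counts[party] += 1
        return party not in self.tagged and self.counts[party] >= self.threshold

    def store_voice_tag(self, party):
        """Associate the spoken voice tag with the call-history entry,
        suppressing further prompts for this party."""
        self.tagged.add(party)
```

With a threshold of three, for example, the third message from an untagged number triggers the prompt, and once the tag is stored, later messages from that number pass silently.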
In an exemplary embodiment of the invention, an email message may be sent to a telephone from a remote location, the email message being sent concurrently to at least one other email address in order to populate a phone book of the telephone with email addresses.
In another exemplary embodiment of the invention, an email message may be sent to a telephone from a remote location, the email message being copied concurrently to at least one other email address in order to populate a phone book of the telephone with email addresses.
In another aspect of the invention, a user interface for prompting a user to enter a voice tag into a telephone advantageously includes means for receiving a user-defined number of messages on the telephone from a particular source; and means for prompting the user to enter a voice tag associated with the particular source into the telephone after the user-defined number of messages from the particular source has been received.
In another aspect of the invention, a user interface for prompting a user to enter a voice tag into a telephone advantageously includes means for sending a user-defined number of messages on the telephone to a particular destination; and means for prompting the user to enter a voice tag associated with the particular destination into the telephone after the user-defined number of messages to the particular destination has been sent.