1.1 Field of the Invention
The present invention relates to systems and methods for controlling computer applications and/or processes using voice input. More precisely, the present invention relates to integrating a plurality of applications and/or processes into a common user interface which is controlled primarily by voice-activated commands, thereby allowing hands-free control of each process within a common environment.
1.2 Discussion of Prior Art
Speech input user interfaces are well known. This specification expressly incorporates by reference U.S. Pat. No. 6,606,599 and U.S. Pat. No. 6,208,972, which provide a method for integrating computing processes with an interface controlled by voice actuated grammars.
Typical speech-driven software technology has traditionally been useful for little more than a dictation system that types what is spoken on a computer display, and has limited command-and-control capability. Although many applications have attempted to initiate command sequences, doing so may require an extensive training session to teach the computer how to handle specific words. Because those words are not maintained in a context-based model that simulates intelligence, it is easy to confuse such speech command systems and cause them to malfunction. In addition, such systems are limited in capability to the few applications that support the speech interface.
It is conventionally known that an application window can spawn another window when the application calls for specific user input. When that happens, we call the first window a “parent window”, and the spawned window a “child window”. This presents certain problems in that the child window generally overlaps its parent window.
Some child windows must be satisfied or terminated before releasing control (active focus) and returning I/O access to the main application window. Examples of child windows are: i) a document window in an application such as Word; ii) a foreground, monopolizing (i.e., modal) window such as a File Open dialog; and iii) a foreground, non-monopolizing (i.e., non-modal) window.
Every speech-initiated application maintains its own operating window as a “child window” of the system. The child/parent window scheme does not allow for complex command processing. A complex command may require more than one application to be invoked, in a specific order, based on a single spoken command phrase. For example, the spoken command phrase “add Bob to address book” is a multiple-step/multiple-application command. The commands required by the prior art are: “open address book”, “new entry” and “name Bob”. In the prior art, each operation must be completed one by one, in sequential order. Although this methodology works to a minimum satisfaction level, it does not use natural language speech. The prior art is typically not capable of performing multiple-step operations with a single spoken command phrase. In addition, the prior art does not enable a single spoken phrase to process commands that require the application to perform multiple steps without first training the application on the sequence of steps that the command must invoke (much like programming a macro).

For example, the spoken command phrase “Write a letter to Bob” requires multiple applications to be used sequentially, and if those applications are not running, they must be launched in order to execute the command. The prior art would typically have the user say: “open address book”, “select Bob”, “copy address”, “open editor”, “new letter” and “paste address”, or would require the user to train the application to perform these steps every time it hears this command. The address book and the text editor/word processor are generally different applications. Because these programs require the data to be organized in a specific order, the voice commands must be performed in a specific order to achieve the desired result. The prior art is not capable of performing such operations across multiple applications entirely on its own in response to a single spoken command phrase.
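The multiple-step, multiple-application behavior described above can be illustrated with a minimal sketch. The phrase-to-steps table, application names, and function names below are hypothetical illustrations introduced here for clarity; they are not part of the prior art nor of any actual system, and the sketch assumes a simple lookup-table mapping from a spoken phrase to an ordered sequence of per-application actions, launching applications on demand.

```python
# Hypothetical sketch: expanding one spoken command phrase into an
# ordered sequence of (application, action) steps. All names below
# are illustrative only.

COMMAND_TABLE = {
    "add bob to address book": [
        ("address_book", "open"),
        ("address_book", "new entry"),
        ("address_book", "name Bob"),
    ],
    "write a letter to bob": [
        ("address_book", "open"),
        ("address_book", "select Bob"),
        ("address_book", "copy address"),
        ("editor", "open"),
        ("editor", "new letter"),
        ("editor", "paste address"),
    ],
}

def execute_phrase(phrase, running):
    """Expand a single spoken phrase into its ordered steps,
    launching any application that is not already running."""
    steps = COMMAND_TABLE.get(phrase.lower())
    if steps is None:
        raise KeyError(f"no command mapping for: {phrase!r}")
    trace = []
    for app, action in steps:
        if app not in running:        # launch on demand
            running.add(app)
            trace.append((app, "launch"))
        trace.append((app, action))
    return trace

trace = execute_phrase("Write a letter to Bob", running=set())
```

The point of the sketch is the ordering constraint: the address-book steps must complete before the editor steps, and each application is launched exactly once even though several steps target it.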
In a windowed operating system, it is common for an executing application window to “pop up” a new “child window” when a secondary type of user interaction is required. When an application is executing a request, focus (active attention within its window) is granted to it. Windowed operating systems running on personal computers are generally limited to a single active focus on a single window at any given time.
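The single-active-focus constraint and the modal-child behavior described above can be sketched as follows. The class and method names are hypothetical illustrations, not an actual operating-system API: a window manager tracks at most one focused window, a spawned child takes the focus, and a modal child blocks input destined for its parent until it is dismissed.

```python
# Hypothetical sketch of single active focus and modal child windows.
# Class and method names are illustrative, not a real OS API.

class Window:
    def __init__(self, title, parent=None, modal=False):
        self.title, self.parent, self.modal = title, parent, modal

class WindowManager:
    def __init__(self):
        self.focus = None             # at most one window holds focus

    def spawn_child(self, parent, title, modal=False):
        child = Window(title, parent=parent, modal=modal)
        self.focus = child            # the pop-up child takes the focus
        return child

    def route_input(self, target):
        """Deliver input only to the single focused window; a modal
        child blocks input destined for its parent window."""
        if self.focus is not target:
            if self.focus and self.focus.modal and self.focus.parent is target:
                return "blocked by modal child"
            return "ignored (no focus)"
        return f"delivered to {target.title}"

wm = WindowManager()
parent = Window("Word")
wm.focus = parent
dialog = wm.spawn_child(parent, "File Open", modal=True)
```

Here, input routed to the File Open dialog is delivered, while input aimed at the parent window is blocked until the modal child releases focus, which is the limitation the speech-driven interface of the present invention seeks to avoid.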
Current computer technology allows application programs to execute their procedures within individual application oriented graphical user interfaces (i.e. “windows”). Each application window program is encapsulated in such a manner that most services available to the user are generally contained within the window. Thus each window is an entity unto itself.
When an application window requires I/O, such as a keyboard input, mouse input or the like, the operating system passes the input data to the application.
Typical computer technologies are not well suited to a speech-driven interface. The use of parent and child windows creates a multitude of problems, since natural language modeling is best suited for complex command processing. Child windows receive active focus one at a time, being sequentially activated by the operating system, and, as stated above, prior art speech command applications are not suited to natural language processing of complex commands.
The following US patents are expressly incorporated herein by reference: U.S. Pat. No. 5,974,413, Oct. 26, 1999, Beauregard et al.; U.S. Pat. No. 5,805,775, Sep. 8, 1998, Eberman et al.; U.S. Pat. No. 5,748,974, May 5, 1998, Johnson; U.S. Pat. No. 5,621,859, Apr. 15, 1997, Schwartz et al.; U.S. Pat. No. 6,208,972, Mar. 27, 2001, Grant et al.; U.S. Pat. No. 5,412,738, May 2, 1995, Brunelli et al.; U.S. Pat. No. 5,668,929, Sep. 16, 1997, Foster Jr.; U.S. Pat. No. 5,608,784, Mar. 4, 1997, Miller; U.S. Pat. No. 5,761,329, Jun. 2, 1998, Chen et al.; U.S. Pat. No. 6,292,782, Sep. 18, 2001, Weideman; U.S. Pat. No. 6,263,311, Jul. 17, 2001, Dildy; U.S. Pat. No. 4,993,068, Feb. 12, 1991, Piosenka et al.; U.S. Pat. No. 5,901,203, May 4, 1999, Morganstein et al.; U.S. Pat. No. 4,975,969, Dec. 4, 1990, Tal; U.S. Pat. No. 4,449,189, May 15, 1984, Feix et al.; U.S. Pat. No. 5,838,968, Nov. 17, 1998, Culbert; U.S. Pat. No. 5,812,437, Sep. 22, 1998, Purcell et al.; U.S. Pat. No. 5,864,704, Jan. 26, 1999, Battle et al.; U.S. Pat. No. 5,970,457, Oct. 19, 1999, Brant et al.; U.S. Pat. No. 6,088,669, Jul. 11, 2000, Maes; U.S. Pat. No. 3,648,249, Mar. 7, 1972, Goldsberry; U.S. Pat. No. 5,774,859, Jun. 30, 1998, Houser et al.; U.S. Pat. No. 6,208,971, Mar. 27, 2001, Bellegarda et al.; U.S. Pat. No. 5,950,167, Sep. 7, 1999, Yaker; U.S. Pat. No. 6,192,339, Feb. 20, 2001, Cox; U.S. Pat. No. 5,895,447, Apr. 20, 1999, Ittycheriah et al.; U.S. Pat. No. 6,192,343, Feb. 20, 2001, Morgan et al.; U.S. Pat. No. 6,253,176, Jun. 26, 2001, Janek et al.; U.S. Pat. No. 6,233,559, May 15, 2001, Balakrishnan; U.S. Pat. No. 6,199,044, Mar. 6, 2001, Ackley et al.; U.S. Pat. No. 6,138,098, Oct. 24, 2000, Shieber et al.; U.S. Pat. No. 6,044,347, Mar. 28, 2000, Abella et al.; U.S. Pat. No. 5,890,122, Mar. 30, 1999, Van Kleeck et al.; U.S. Pat. No. 5,812,977, Sep. 22, 1998, Douglas; U.S. Pat. No. 5,685,000, Nov. 4, 1997, Cox Jr.; U.S. Pat. No. 5,461,399, Oct. 24, 1995, Cragun; U.S. Pat. No. 4,513,189, Apr. 
23, 1985, Ueda et al.; U.S. Pat. No. 4,726,065, Feb. 16, 1988, Froessl; U.S. Pat. No. 4,766,529, Aug. 23, 1988, Nakano et al.; U.S. Pat. No. 5,369,575, Nov. 29, 1994, Lamberti et al.; U.S. Pat. No. 5,408,582, Apr. 18, 1995, Colier; U.S. Pat. No. 5,642,519, Jun. 24, 1997, Martin; U.S. Pat. No. 6,532,444, Mar. 11, 2003, Weber; and U.S. Pat. No. 6,212,498, Apr. 3, 2001, Sherwood et al.