1.1 Field of the Invention
The present invention relates to systems and methods for controlling computer applications and/or processes using voice input. More precisely, the present invention relates to integrating a plurality of applications and/or processes into a common user interface that is controlled primarily by voice-activated commands, enabling hands-free control of each process within a common environment.
1.2 Discussion of Prior Art
Speech input user interfaces are well known. This specification expressly incorporates by reference U.S. Pat. No. 6,606,599 and U.S. Pat. No. 6,208,972, which provide a method for integrating computing processes with an interface controlled by voice actuated grammars.
Typical speech-driven software has traditionally been useful for little more than a dictation system that types what is spoken on a computer display, with limited command-and-control capability. Although many applications have attempted to initiate command sequences, doing so may require an extensive training session to teach the computer how to handle specific words. Because those words are not maintained in a context-based model that simulates intelligence, it is easy to confuse such speech command systems and cause them to malfunction. In addition, such systems are limited to the few applications that support the speech interface.
It is conventionally known that an application window can spawn another window when the application calls for specific user input. When that happens, the first window is referred to as a “parent window”, and the spawned window as a “child window”. This presents certain problems, in that the child window generally overlaps its parent window.
Some child windows must be satisfied or terminated before releasing control (active focus) and returning I/O access to the main application window. Examples of child windows are: i) a document window in an application such as Word; ii) a foreground, monopolizing (i.e., modal) window such as File Open; and iii) a foreground, non-monopolizing (i.e., non-modal) window.
Every speech-initiated application maintains its own operating window as a “child window” of the system. The child/parent window scheme does not allow for complex command processing. A complex command may require more than one application to be invoked, in a specific order, based on a single spoken command phrase. For example, the spoken command phrase “add Bob to address book” is a multiple-step, multiple-application command. The commands required by the prior art are: “open address book”, “new entry” and “name Bob”. In the prior art, each operation must be completed one by one, in sequential order. Although this methodology works to a minimum level of satisfaction, it does not use natural language speech. The prior art is typically not capable of performing multiple-step operations with a single spoken command phrase. Nor does the prior art enable a single spoken phrase to process commands that require an application to perform multiple steps without first training the application on the sequence of steps that the command must invoke (much like programming a macro). For example, the spoken command phrase “Write a letter to Bob” requires multiple applications to be used sequentially, and if those applications are not running, they must be launched in order to execute the command. The prior art would typically have the user say: “open address book”, “select Bob”, “copy address”, “open editor”, “new letter” and “paste address”, or would require the user to train the application to perform these steps every time it hears this command. The address book and the text editor/word processor are generally different applications. Because these programs require the data to be organized in a specific order, the voice commands must be performed in a specific order to achieve the desired result. The prior art is not capable of performing operations across multiple applications entirely on its own from a single spoken command phrase.
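The decomposition described above can be sketched as a simple grammar lookup that expands one spoken phrase into an ordered, multi-application step sequence. This is a minimal illustrative sketch, not taken from the patent: the grammar patterns, application names, and step names are assumptions chosen to mirror the examples in the text.

```python
# Illustrative sketch: map a single spoken command phrase onto an ordered
# sequence of (application, action) steps. All pattern, application, and
# action names are hypothetical.

COMMAND_GRAMMAR = {
    "add <name> to address book": [
        ("address_book", "open"),
        ("address_book", "new entry"),
        ("address_book", "name"),
    ],
    "write a letter to <name>": [
        ("address_book", "open"),
        ("address_book", "select"),
        ("address_book", "copy address"),
        ("editor", "open"),
        ("editor", "new letter"),
        ("editor", "paste address"),
    ],
}

def decompose(phrase, grammar=COMMAND_GRAMMAR):
    """Return (steps, argument) for the first pattern the phrase matches,
    where steps is the ordered list of (application, action) pairs and
    argument is the text that filled the <name> slot (e.g. "Bob")."""
    for pattern, steps in grammar.items():
        prefix, _, suffix = pattern.partition("<name>")
        if phrase.startswith(prefix) and phrase.endswith(suffix):
            argument = phrase[len(prefix):len(phrase) - len(suffix)]
            return steps, argument
    return [], None
```

The point of the sketch is that a single phrase such as “add Bob to address book” expands into three ordered steps inside one application, while “write a letter to Bob” expands into six steps spread across two applications, without the user uttering each step individually.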
In each windowed operating system it is common for an executing application window to “pop up” a new “child window” when a secondary type of interaction is required from the user. When an application is executing a request, focus (active attention within its window) is granted to it. Windowed operating systems running on personal computers are generally limited to a single active focus on a single window at any given time.
Current computer technology allows application programs to execute their procedures within individual application oriented graphical user interfaces (i.e. “windows”). Each application window program is encapsulated in such a manner that most services available to the user are generally contained within the window. Thus each window is an entity unto itself.
When an application window requires I/O, such as a keyboard input, mouse input or the like, the operating system passes the input data to the application.
Typical computer technologies are not well suited to a speech-driven interface. The use of parent and child windows creates a multitude of problems, because natural language modeling produces complex commands while the window scheme handles only one action at a time. Child windows receive active focus as single windows and are sequentially activated by the operating system (a single action each); as stated above, prior art speech command applications are therefore not suited for natural language processing of complex commands.
The following US patents are expressly incorporated herein by reference: U.S. Pat. No. 5,974,413, 1999 Oct. 26, Beauregard et al.; U.S. Pat. No. 5,805,775, 1998 Sep. 8, Eberman et al.; U.S. Pat. No. 5,748,974, 1998 May 5, Johnson; U.S. Pat. No. 5,621,859, 1997 Apr. 15, Schwartz et al.; U.S. Pat. No. 6,208,972, 2001 Mar. 27, Grant et al.; U.S. Pat. No. 5,412,738, 1995 May 2, Brunelli et al.; U.S. Pat. No. 5,668,929, 1997 Sep. 16, Foster Jr.; U.S. Pat. No. 5,608,784, 1997 Mar. 4, Miller; U.S. Pat. No. 5,761,329, 1998 Jun. 2, Chen et al.; U.S. Pat. No. 6,292,782, 2001 Sep. 18, Weideman; U.S. Pat. No. 6,263,311, 2001 Jul. 17, Dildy; U.S. Pat. No. 4,993,068, 1991 Feb. 12, Piosenka et al.; U.S. Pat. No. 5,901,203, 1999 May 4, Morganstein et al.; U.S. Pat. No. 4,975,969, 1990 Dec. 4, Tal; U.S. Pat. No. 4,449,189, 1984 May 15, Feix et al.; U.S. Pat. No. 5,838,968, 1998 Nov. 17, Culbert; U.S. Pat. No. 5,812,437, 1998 Sep. 22, Purcell et al.; U.S. Pat. No. 5,864,704, 1999 Jan. 26, Battle et al.; U.S. Pat. No. 5,970,457, 1999 Oct. 19, Brant et al.; U.S. Pat. No. 6,088,669, 2000 Jul. 11, Maes; U.S. Pat. No. 3,648,249, 1972 Mar. 7, Goldsberry; U.S. Pat. No. 5,774,859, 1998 Jun. 30, Houser et al.; U.S. Pat. No. 6,208,971, 2001 Mar. 27, Bellegarda et al.; U.S. Pat. No. 5,950,167, 1999 Sep. 7, Yaker; U.S. Pat. No. 6,192,339, 2001 Feb. 20, Cox; U.S. Pat. No. 5,895,447, 1999 Apr. 20, Ittycheriah et al.; U.S. Pat. No. 6,192,343, 2001 Feb. 20, Morgan et al.; U.S. Pat. No. 6,253,176, 2001 Jun. 26, Janek et al.; U.S. Pat. No. 6,233,559, 2001 May 15, Balakrishnan; U.S. Pat. No. 6,199,044, 2001 Mar. 6, Ackley et al.; U.S. Pat. No. 6,138,098, 2000 Oct. 24, Shieber et al.; U.S. Pat. No. 6,044,347, 2000 Mar. 28, Abella et al.; U.S. Pat. No. 5,890,122, 1999 Mar. 30, Van Kleeck et al.; U.S. Pat. No. 5,812,977, 1998 Sep. 22, Douglas; U.S. Pat. No. 5,685,000, 1997 Nov. 4, Cox Jr.; U.S. Pat. No. 5,461,399, 1995 Oct. 24, Cragun; U.S. Pat. No. 4,513,189, 1985 Apr. 23, Ueda et al.; U.S. Pat. No. 
4,726,065, 1988 Feb. 16, Froessl; U.S. Pat. No. 4,766,529, 1988 Aug. 23, Nakano et al.; U.S. Pat. No. 5,369,575, 1994 Nov. 29, Lamberti et al.; U.S. Pat. No. 5,408,582, 1995 Apr. 18, Colier; U.S. Pat. No. 5,642,519, 1997 Jun. 24, Martin; U.S. Pat. No. 6,532,444, 2003 Mar. 11, Weber; and U.S. Pat. No. 6,212,498, 2001 Apr. 3, Sherwood et al.