The evolution of the computer industry is unparalleled in its rate of growth and complexity. Personal computers, for example, which began as little more than calculators with limited memory, tape-driven input and monochrome displays, are now able to handle almost any data processing task with relative ease. While ever-increasing computing power provides greater capabilities for application programmers and end users alike, the corresponding increase in complexity creates an ease-of-use problem. Consequently, computer system designers are faced with a new challenge, namely to harness the available computing power in a form that is usable even by those with relatively little computer training, to ease the transition of users into a computer-based information paradigm.
In pursuit of this objective, various input/output philosophies, such as “user friendly”, “WYSIWYG” and “menu driven”, have become popular. These approaches to the input/output issue are particularly applicable to microcomputers, also known as personal computers, which are intended to appeal to a broad audience of computer users, including those who have no previous computer experience. An important aspect of computers which employ these input/output concepts is the interface which allows the user to input commands and data, and receive results. One particularly prevalent form of interface is known as the graphical user interface (GUI).
One popular type of GUI display is based on a visual metaphor which defines a monitor screen to be a workspace known as a “desktop”, in which the contents of documents are presented in relocatable regions known as “windows.” In addition to windows, the graphical user interface typically includes icons that represent various objects in a computer system. In this context, the term “object” refers to any software entity that exists in the memory of the computer and is an instance of a particular class. For example, an object can be a data file which contains the contents of a document. It can also be an application program or other type of service provider, such as a hardware driver. An object can also be a container for other objects, such as a folder or a window.
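The object taxonomy described above can be sketched as a small class hierarchy. This is an illustrative model only; all class and attribute names here are assumptions, not drawn from any actual system:

```python
# Minimal sketch of the "object" taxonomy: files, service providers,
# and containers are all instances of some class held in memory.
# All names are hypothetical, chosen for illustration.

class SystemObject:
    """Any software entity in memory; an instance of a particular class."""
    def __init__(self, name):
        self.name = name

class DataFile(SystemObject):
    """A data file holding the contents of a document."""
    def __init__(self, name, contents=""):
        super().__init__(name)
        self.contents = contents

class Application(SystemObject):
    """An application program or other service provider."""
    pass

class Container(SystemObject):
    """A container for other objects, such as a folder or a window."""
    def __init__(self, name):
        super().__init__(name)
        self.children = []

    def add(self, obj):
        self.children.append(obj)

folder = Container("Documents")
folder.add(DataFile("report.txt", "Quarterly results"))
```

A folder containing a data file is thus simply a container object whose children list holds a file object.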
One of the advantages offered by the graphical user interface, in terms of making the computer easier to use, is the ability for the user to effortlessly manipulate objects by moving, or otherwise acting upon, their icon representations. For example, a graphical user interface typically includes a cursor, or a similar type of pointing device, that is controlled by the user to select objects. By actuating a button or key while the cursor is positioned over an icon, for example by clicking a mouse button, the user can select the object to perform an action upon it. If the icon represents an application program, the action might be to launch the program. If the icon represents a data file, the action might cause the file to be opened within the application program that was used to create it. Alternatively, the file can be copied, moved into a folder, deleted, or the like.
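The type-dependent behavior described above, where clicking an icon launches an application or opens a data file in its creating program, can be sketched as a simple hit-test and dispatch. The icon geometry and the object fields (`kind`, `name`, `creator`) are hypothetical, used only to make the idea concrete:

```python
# Illustrative sketch: a click selects the icon under the cursor, and
# the resulting action depends on the kind of object it represents.
# All field names and geometry are assumptions for illustration.

class Icon:
    def __init__(self, obj, x, y, w=32, h=32):
        self.obj, self.x, self.y, self.w, self.h = obj, x, y, w, h

    def hit(self, px, py):
        """True if the cursor position falls within the icon's bounds."""
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def click(icons, px, py):
    """Return the action for the icon under the cursor, if any."""
    for icon in icons:
        if icon.hit(px, py):
            obj = icon.obj
            if obj["kind"] == "application":
                return f"launch {obj['name']}"
            if obj["kind"] == "file":
                return f"open {obj['name']} in {obj['creator']}"
    return None  # click on empty desktop: no action

icons = [
    Icon({"kind": "application", "name": "WordProc"}, 0, 0),
    Icon({"kind": "file", "name": "report.txt", "creator": "WordProc"}, 50, 0),
]
```

Clicking at (10, 10) would launch the application, while clicking at (60, 10) would open the file in the program that created it.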
Another feature of graphical user interfaces that contributes to their ease of use is menus, which provide a simple, straightforward method for the user to view and choose commands that relate to an application running on the computer. In one popular format, a root menu is located in a menu bar at the top of the screen, immediately above the desktop. The menu bar contains a number of menu items which represent general categories of commands. If a user clicks a mouse button while the cursor is positioned over one of these items, a “pull-down” menu appears, listing the commands available within that category. For example, a “File” category might contain commands that are appropriate to files as a whole, such as “open”, “close” and “print”. Another category labeled “Edit” might contain commands relating to the editing of objects, such as “copy”, “paste”, and the like. For further information relating to pull-down menus, reference is made to U.S. Pat. Re. 32,632, the disclosure of which is incorporated herein by reference.
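The menu-bar organization just described, general categories in the root menu, each revealing a list of commands when clicked, can be modeled as a simple mapping. The category and command names below are taken from the examples above; the structure itself is an illustrative assumption:

```python
# Illustrative model of a menu bar: root-menu categories map to the
# command lists shown in their pull-down menus.

MENU_BAR = {
    "File": ["open", "close", "print"],
    "Edit": ["copy", "paste"],
}

def pull_down(category):
    """Clicking a menu-bar item reveals that category's commands."""
    return MENU_BAR.get(category, [])
```

A click on “File” in this model yields the command list open, close, print; a click outside any category yields nothing.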
Typically, when an action is to be performed upon an object, the user first selects the object, for example by clicking a mouse button while the cursor is positioned over the object, and then chooses the particular command to be performed on the selected object. Thus, if the user wishes to print a particular document, the user can first select the icon which represents that document, and then move the cursor to the menu bar and click upon the “File” category, to cause its pull-down menu to appear. The cursor is then scrolled down the menu until it is positioned over the “print” command, at which time the user actuates the mouse button to initiate the command to print the document.
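The select-then-command sequence above, first select the object, then choose the command that acts on it, can be sketched as a two-step interaction with a current selection. The class and method names are hypothetical:

```python
# Sketch of the select-then-command sequence: the user first selects an
# object, then chooses a menu command that acts on the current selection.
# Names are illustrative assumptions, not an actual API.

class Desktop:
    def __init__(self):
        self.selection = None
        self.log = []

    def select(self, obj):
        """Step 1: click the object's icon to make it the selection."""
        self.selection = obj

    def choose(self, command):
        """Step 2: pick a command; it applies to the selected object."""
        if self.selection is None:
            return None  # no object selected: the command has no target
        self.log.append((command, self.selection))
        return f"{command} {self.selection}"

d = Desktop()
d.select("report.txt")          # click the document's icon
result = d.choose("print")      # File menu -> "print"
```

Note that choosing a command with nothing selected does nothing, which mirrors the convention that menu commands act on the current selection.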
While the graphical user interface has contributed significantly to the ease of use of computers, particularly the ability to carry out actions through a cursor and menus, it is desirable to further enhance the user experience. For example, as with any new environment, a certain amount of learning time is required for beginning users to discover which commands are applicable in a particular context, and the locations of these commands within the various pull-down menus under the root menu. Further in this regard, similar commands may not appear under the same headings in different, but related applications. For example, in a text document, the user may desire to change the margins. In one word processing program, the command for doing so may be located under a “format” category, whereas in another word processing application, it may be located under a sub-category labeled “document” or the like.
Another factor to consider in the user experience is the physical effort required by the user. One form of effort is the number of mouse button clicks, or similar types of key actuations, that are needed. Typically, one or more clicks are necessary to select an item, and an additional one or more clicks are required to choose a command from the menu. Another form of effort is embodied in the distance that the cursor is required to travel throughout an operation. This distance is directly related to the amount of travel that is required by a mouse, or the amount of rotation that a user must impart to a trackball, or the like, from the first click of the selection to the last click of the command. All of these factors combine to form the total duration of the action that is typically required on the part of the user, beginning with the selection of an item, moving to the menu bar, and choosing a command.
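The two effort measures above, click count and cursor travel, can be made concrete with a short calculation. The coordinates below are invented solely to illustrate the icon-to-menu-bar path; they do not come from any measured system:

```python
import math

# Sketch of the effort metrics: total pointer travel is the sum of
# straight-line distances between successive click positions, and the
# click count is one actuation per position. Coordinates are invented.

def travel_distance(points):
    """Sum of Euclidean distances along the cursor's click path."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# Example path: select an icon near the bottom of the screen, move to
# the menu bar at the top, then down to a command in the pull-down menu.
path = [(100, 600), (100, 10), (100, 60)]
distance = travel_distance(path)   # 590 + 50 = 640 pixels of travel
clicks = len(path)                 # one click at each position
```

On a large display the first leg of such a path dominates, which is why the distance term, and not just the click count, matters to the total duration of the action.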
On computer systems with large or multiple monitors, these actions can result in a rather lengthy process. For new users who must search for commands, the process can be even longer. In addition, for computer systems having relatively small cursor control devices, such as portable computers which employ a small trackball, lengthy cursor movements can prove to be cumbersome.
Therefore, it is desirable to provide a graphical user interface which makes it easier for users to discover the commands that are appropriate in a given context, as well as give the user a more efficient means for quickly executing the commands.