Because of the widespread popularity of computers, most people have become comfortable with conventional computer user input devices such as keyboards and pointing devices. The keystrokes on a keyboard, and the cursor movement and control provided by mice, trackballs, joysticks, and other pointing devices, are sufficiently intuitive to provide satisfactory interfaces for most computer-related tasks.
However, as computers become more commonplace throughout our environment, the desire to make computers and their interfaces even more user-friendly continues to promote development in this area. For example, the MIT Media Lab, as reported by Brygg Ullmer and Hiroshi Ishii in "The metaDESK: Models and Prototypes for Tangible User Interfaces," Proceedings of UIST, October 1997, pp. 14-17, has developed another form of "keyboardless" human-machine interface. The metaDESK includes a generally planar graphical surface that not only displays computing system text and graphic output, but also receives user input by responding to an object placed against the graphical surface. The combined object-responsive and display capability of the graphical surface of the metaDESK is facilitated using infrared (IR) lamps, an IR camera, a video camera, a video projector, and mirrors disposed beneath the surface of the metaDESK. The mirrors reflect the graphical image projected by the projector onto the underside of the graphical display surface to provide images that are visible to a user from above the graphical display surface. The IR camera can detect IR reflections from the undersurface of an object placed on the graphical surface, to detect the object and its disposition.
Others have been developing similar keyboardless interfaces. For example, papers published by Jun Rekimoto of the Sony Computer Science Laboratory, Inc. and associates describe a “HoloWall” and a “HoloTable” that display images on a surface and use IR light to detect objects positioned adjacent to the surface.
By detecting a specially configured object or by detecting IR reflected light from an object disposed on a graphical display surface, the metaDESK can respond to the contemporaneous placement and movement of the object on the display surface to carry out a predefined function, such as displaying and moving a map of the MIT campus. Thus, computing systems such as the HoloWall and metaDESK may provide a more natural degree of human-machine interaction by providing the means for a computer to respond to specific objects in a defined manner. By facilitating a more natural input arising from the person's interaction with a graphical display, such technologies broaden and extend the manner in which a user might provide input to a computing system. This extended ability, however, does present some concerns, which are not necessarily unique to this form of user interaction with applications and an operating system. Indeed, graphical user interfaces often become crowded with icons used to invoke commands, functions, and applications. Thus, as a graphic user interface display screen becomes increasingly visually busy, it also becomes increasingly easy for a user to unintentionally invoke an unintended function. This problem can occur in regard to all types of user interactive displays.
FIG. 1A shows a computer display screen 100a displaying a conventional spreadsheet application window 110 and a conventional web browser application window 120. On application windows 110 and 120, there are numerous icons 102 that enable a user to initiate functions with a pointing device (not shown). The user can simply direct a cursor 104 to a selected icon and invoke the function it represents by depressing a control button on the pointing device one or more times. Application windows 110 and 120 both include a familiar trio of icons in their upper right-hand corners. These icons include minimize icons 112 and 122, maximize icons 114 and 124, and exit icons 116 and 126. Because of the close proximity of these icons to each other, even skilled users may, on occasion, inadvertently select and activate an undesired icon.
As shown in a screen 100b of FIG. 1B, for example, the user has selected exit icon 116 in spreadsheet application window 110. If the user's selection was unintentional, all of the data entered in application window 110 might be lost when the spreadsheet application window closes. To safeguard against such loss of data, once any new data have been entered into application window 110, selection of exit icon 116 conventionally often causes a confirmation dialog box 130 to be presented to the user. Confirmation dialog box 130 includes buttons enabling a user to confirm or retract a selection of exit icon 116 by selecting "yes" button 132 to exit after saving changes, "no" button 134 to exit without saving changes, or "cancel" button 136 to return to application window 110 without saving changes or exiting the application. A user accidentally selecting exit icon 116 most likely would choose cancel button 136 to rescind the unintended action and thereby avoid the undesired loss of data changed (i.e., entered, edited, or deleted) since the spreadsheet was last saved.
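The exit-confirmation behavior described above can be expressed as a simple decision procedure. The sketch below is purely illustrative and does not come from any cited system; all function and variable names (e.g., handle_exit_request, ask_user) are hypothetical.

```python
# Illustrative sketch of a conventional exit-confirmation flow.
# All names here are hypothetical, chosen only for this example.

def handle_exit_request(has_unsaved_changes, ask_user):
    """Decide what to do when the user selects the exit icon.

    ask_user() stands in for presenting the confirmation dialog and
    returns one of "yes" (save changes and exit), "no" (exit without
    saving), or "cancel" (return to the application window).
    """
    if not has_unsaved_changes:
        return "exit"              # nothing to lose; close immediately
    choice = ask_user()
    if choice == "yes":
        return "save_and_exit"     # save changes, then close
    if choice == "no":
        return "exit"              # discard changes and close
    return "resume"                # "cancel": keep the window open
```

A user who selected the exit icon by accident would answer "cancel", and the application would resume with no data lost.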
However, not all applications provide such safeguards. For example, a web browser, such as that shown by way of example in application window 120, might enable a user to exit the web browser application by selecting exit icon 126 without requesting confirmation from the user. Accordingly, as shown in a screen 100c of FIG. 1C, if a user moves cursor 104 to exit icon 126 in application window 120 and selects the exit icon, application window 120 will be closed without requiring that the user confirm the apparent decision to exit the browser application. Thus, if a user should inadvertently select exit icon 126 of browser application window 120 in screen 100c, the user might unexpectedly be presented with a blank screen. If the user was in the middle of writing a lengthy e-mail using a web-based e-mail service, or had found a desired web page after an exhaustive search, one errant mouse click would cause the user's work to be lost. These types of problems can arise in almost any type of graphic user interface environment in which a user may make an unintended selection and thereby invoke a function that produces an undesired result.
While requiring confirmation of each function selected by a user as is conventionally done might solve the problem of inadvertent selection of a function by the user, it is simply not efficient to require that the user confirm the selection of all functions. Usually, only functions that can produce serious consequences are selected for confirmation, but even then, the user may tire of being required to confirm such a selection each time it is actually intended. Also, some functions, which may have less serious consequences, will simply add to the confusion and clutter of a graphic user interface display, if always visible to a user. Thus, it would be desirable to develop a different approach for dealing with these issues.