It is well understood that the proliferation of personal computers has revolutionized the very nature of computing. The personal computer brought computers out of the climate-controlled data centers of large corporations and into small businesses and homes. Well before the Internet became widely available, people began using computers on a daily basis for activities ranging from accounting and tracking personal finances, to word processing, to games.
In hindsight, part of what is remarkable about the personal computer revolution is that the early personal computers were not very user friendly. The human-machine interface on early machines typically consisted of a monochromatic display for presenting information to the user and a keyboard for entering data and giving commands to the computer. While personal computers were powerful tools, using keyboards to get the computers to perform desired tasks was not always straightforward, and certainly not always easy.
To initiate commands on earlier personal computers, users typically had to remember obscure keystroke combinations or type commands and file names. For example, merely to retrieve a document or other object, a user had to remember the specific function key or other key string that should be pressed to initiate a retrieval command. With the command entered, the user either had to remember and key in the name of the desired data file, or review a listing of the names of documents available on a storage device until the desired data file was found. Moreover, prior to the proliferation of graphical user interface operating systems, file names typically were limited to eight characters. Thus, merely trying to identify the desired file was not a simple matter.
Once a file was retrieved, the user was able to make changes to the file, but once again, the user typically had to remember the appropriate function keys or other key strings required to initiate particular commands. Because of the numerous permutations and combinations of the SHIFT, ALT, CTRL, and function keys that might have to be used to enter commands in revising a document, users commonly relied upon keyboard overlay templates that literally listed all the available commands associated with each key or keystroke combination. Saving the revised document also required similar, non-user-friendly processes.
Fortunately, the development of graphical user interfaces, such as that provided by Microsoft Corporation's WINDOWS™ operating system, began a transformation of human-machine interaction. Improvements in processor and memory price-performance made possible user environments in which users could engage the computer with an intuitive pointing device, such as a mouse, pointing and clicking to select desired functions. The personal computer revolution took a dramatic step forward due to the power of such user-friendly interfaces.
In seeking to further improve the human-machine interface, ever-increasing hardware capabilities have made possible voice and speech recognition systems that avoid the need to enter text on a keyboard. Personal digital assistants and tablet PCs can now recognize human handwriting. Such hardware can thus provide a more efficient and satisfying experience for users who prefer not to type on a keyboard or are less proficient in doing so.
As computers become more ubiquitous throughout our environment, the desire to make computers and their interfaces even more user friendly continues to promote development in this area. For example, the MIT Media Lab, as reported by Brygg Ullmer and Hiroshi Ishii in “The metaDESK: Models and Prototypes for Tangible User Interfaces,” Proceedings of UIST 10/1997:14-17, has developed another form of “keyboardless” human-machine interface. The metaDESK includes a generally planar graphical surface that not only displays computing system text and graphic output, but also receives user input by responding to an object placed against the graphical surface. The combined object-responsive and display capability of the graphical surface of the metaDESK is facilitated using infrared (IR) lamps, an IR camera, a video camera, a video projector, and mirrors disposed beneath the surface of the metaDESK. The mirrors reflect the graphical image projected by the projector onto the underside of the graphical display surface to provide images that are visible to a user from above the graphical display surface. The IR camera can detect IR reflections from the undersurface of an object placed on the graphical surface.
Others have been developing similar keyboardless interfaces. For example, papers published by Jun Rekimoto of the Sony Computer Science Laboratory, Inc., and associates describe a “HoloWall” and a “HoloTable” that display images on a surface and use IR light to detect objects positioned adjacent to the surface.
By detecting a specially formed object or IR-reflected light from an object disposed on a graphical display surface, the metaDESK can respond to the contemporaneous placement and movement of the object on the display surface to carry out a predefined function, such as displaying and moving a map of the MIT campus. Ultimately, however, it would be desirable to expand upon this functionality, to enable a user to interact with a display surface with additional or other objects that make the use of a personal computer even more convenient. It would therefore clearly be desirable to enable ordinary objects to interact with a computing system. It would further be desirable to provide an even more intuitive, user-friendly method of engaging a computing system via an interactive display surface, using ordinary objects to control an application that is being executed.