Software and hardware suppliers are continually trying to develop new applications to make tasks, such as interfacing with a computational apparatus (i.e., a computer), both simpler and more efficient. Applications presently available for interfacing with such apparatus work solely in one environment, such as a traditional QWERTY keyboard or other mechanical pushbutton array, or separately on a capacitive-sensing or other technology-based touch screen. These applications also require familiarity with a particular or provided interface, and often do not permit a user to benefit from his or her knowledge or preferences (such as right-handed or left-handed operation, single-handed operation, shortcut keystroke commands, touch- and texture-dependent interfacing, etc.) to manipulate and efficiently use the application.
Another problem exists in that our most common form of input, the keyboard, has not adapted to the changing needs of computer users. The earliest personal computers were considered word processors, the successors to the typewriter. As times evolved, computers took on more and more tasks that go well beyond typing, for which the keyboard was not designed: media playback, browsing the internet, and digital creation, in which users manipulate and create images and photographs, produce films, and draw and paint art. Typing/word processing is no longer the primary task of most computer users, comprising only a small percentage of what is done on computers.
The computer has changed, but the keyboard has not. It is no longer the most appropriate interface for non-typing tasks, as is exemplified by the prevalence of extensive keyboard shortcuts. The mouse is very flexible within a GUI, but it is slow and requires precise hand movement. Keyboards, while providing a faster form of input, lack the flexibility that typical computing users desire. The stopgap solution to this problem has been to create keyboard shortcuts that permit a gain in speed, but at the expense of having to memorize letter-to-function shortcuts, a time- and memory-intensive task.
Additionally, keyboard shortcuts traditionally use letters as part of combination keystroke commands that are determined phonetically (e.g., I for in and O for out), which causes additional problems. These problems arise because the keyboard has a QWERTY layout pattern and therefore no operational organization to make the command keystrokes easier for the user (based on the location of the particular keys when the keyboard is organized by letter). With a custom interface, these problems are eliminated, and a user is permitted to organize buttons by function without having to memorize shortcuts.
Touch screen displays have proliferated widely, due, for example, to the release of several touch screen platforms, including the iPhone. The computing population is increasingly using touch screen based devices and learning a natural user interface (NUI), in which one can see a button within a GUI and touch it to cause an action. Larger touch screen devices such as the iPad have begun to take over some of the tasks traditionally done on laptops and desktops.
These interfaces often present a direct link to the software, which entails certain drawbacks. For tasks such as image editing, for example, a user may be touching the screen and leaving smudges that obscure or decrease the visual quality of the image. In addition, the user's hand or stylus will often block parts of the image while interacting with the display, which is less than ideal. Also, for professionals, having the input and the screen in the same plane on the same device has ergonomic disadvantages. For example, when operating a computer, it is recommended that the top of the screen be at eye level and the input device, most often the keyboard and mouse, be at a height approximately 5 inches above the lap, where the shoulders can relax and the arms are naturally bent at 90 degrees. This is impossible to achieve when the input and screen are one and the same, as with current touch screen devices.
A new form of interaction involves gestures in free air, as with, for example, the Microsoft Kinect. At the current state of the technology, the advantage of free air movement is outweighed by the disadvantage of a lack of tactile response. The current state of the art employs a movement-visual action-response system: one must match one's movement against a solely visual feedback system. The human body has a remarkable visual system, but it still relies on a combination of sensory feedback for intuitive and natural interactions. For example, when grabbing a virtual box by extending one's arm and closing one's hand, the only feedback in the current state of the art that the box has been correctly grabbed is the corresponding visual cue (say, an animation of a digital hand grabbing a digital box). Nothing is felt through the touch sensors, and that tactile feedback, it turns out, is crucial to intuitive interfaces. Additionally, waving at a computer display for an extended period of time is undesirable and causes fatigue.
There is also a shortcoming in the current state of the art with respect to user-defined or user-customized interfaces for such applications. Preferably, a user would have access to tools for creating customized interfaces in order, by way of example but not limitation, to create custom keystroke or movement-based commands (e.g., physical gestures, speech or other sounds, cognizable eye movement, etc.) for communicating with the computational apparatus. Thus, there are several problems presently faced by those in the art with respect to available input devices for interfacing with computational apparatus.