Many mobile computing devices (e.g., tablets, phones, etc.) use a pen, pointer, or stylus type input device (collectively referred to herein as a “pen type input device” or “pen”) in combination with a digitizer component of the computing device for input purposes. Typically, pen type input devices enable a variety of multi-modal pen, touch, and motion based input techniques.
Various conventional input techniques have adapted pen type devices to provide auxiliary input channels including various combinations of tilting, rolling, and pressure sensing. However, many of these techniques are limited in that they rely on sensors coupled to the computing device, meaning that pen movements or hover conditions can be sensed and considered only when the pen is close enough to the digitizer to be detected by it. Many such techniques operate in a context where the pen is used to perform various input actions that are then sensed and interpreted by the computing device.
For example, one conventional technique considers pen rolling during handwriting and sketching tasks, as well as various intentional pen rolling gestures. However, these pen rolling techniques require the pen to remain in close proximity to the computing device, since they rely on sensors associated with that device. Related techniques that require the pen type input device to maintain contact (or extreme proximity) with the digitizer include various tilt and pressure based pen inputs. Various examples of such techniques consider separate or combined tilt and pressure inputs in various tablet-based settings for interacting with context menus, providing multi-parameter selection, object or menu manipulation, widget control, etc.
In contrast, various conventional techniques use an accelerometer-enhanced pen to sense movements when the pen or stylus is not touching the display. The sensed movements are then provided to the computing device for input purposes such as shaking the stylus to cycle through color palettes, and rolling the stylus to pick colors or scroll web pages. A somewhat related technique provides a pointing device having multiple inertial sensors to enable three-dimensional pointing in a “smart room” environment. This technique enables a user to gesture to objects in the room and speak voice commands. Other techniques use 3D spatial input to employ stylus-like devices in free space, but require absolute tracking technologies that are generally impractical for mobile pen-and-tablet type interactions.
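A technique of the kind described above, in which rolling an accelerometer- or gyroscope-enhanced stylus cycles through options, can be sketched as follows. This is an illustrative assumption of how such a technique might operate, not a description of any particular conventional system; the sample rate, threshold, and function names are hypothetical.

```python
# Hedged sketch: detecting an intentional stylus "roll" from gyroscope
# samples, assuming the gyro reports angular velocity (rad/s) about the
# pen's long (barrel) axis at a fixed sample rate. The threshold and
# sample rate below are illustrative assumptions.

ROLL_THRESHOLD = 3.14   # radians of cumulative roll (~180 deg) per event
SAMPLE_DT = 0.01        # seconds between gyro samples (100 Hz assumed)

def detect_roll_events(gyro_barrel_samples):
    """Integrate angular velocity about the barrel axis; emit one
    roll event each time the accumulated rotation passes the
    threshold, e.g. to cycle through a color palette."""
    events = []
    accumulated = 0.0
    for omega in gyro_barrel_samples:
        accumulated += omega * SAMPLE_DT
        # A sustained roll may cross the threshold more than once.
        while abs(accumulated) >= ROLL_THRESHOLD:
            events.append('roll_cw' if accumulated > 0 else 'roll_ccw')
            accumulated -= ROLL_THRESHOLD if accumulated > 0 else -ROLL_THRESHOLD
    return events

# Synthetic trace: steady clockwise roll at ~6.3 rad/s for one second,
# i.e. about two half-turns, producing two palette-cycle events.
samples = [6.3] * 100
print(detect_roll_events(samples))  # → ['roll_cw', 'roll_cw']
```

In practice such events would be debounced and gated (e.g., ignored during active inking) so that incidental pen motion is not misinterpreted as an intentional gesture.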
Recently, various techniques involving the use of contact (touch) sensors or multi-contact pressure (non-zero force) sensors on a pen surface have been used to enable various grip-sensing input scenarios. For example, stylus barrels have been developed to provide multi-touch capabilities for sensing finger gestures. Specific grips can also be associated with particular pens or brushes. Several conventional systems employ inertial sensors in tandem with grip sensing to boost grip pattern recognition.
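The combination of barrel grip sensing with inertial data described above can be sketched as a simple pattern-matching problem. The sensor layout, feature choices, grip labels, and nearest-prototype matching below are illustrative assumptions; actual grip-sensing systems typically use trained classifiers over much richer sensor data.

```python
# Hedged sketch: fusing a coarse capacitive grip image from the pen
# barrel with an inertial (tilt) feature to recognize grip patterns.
# All names, grid sizes, and prototype values are hypothetical.

def grip_features(contact_grid, accel_tilt):
    """Flatten a coarse touch grid from the pen barrel and append a
    tilt reading, yielding one feature vector per observation."""
    return [cell for row in contact_grid for cell in row] + [accel_tilt]

def classify_grip(features, prototypes):
    """Nearest-prototype match: return the grip label whose stored
    feature vector is closest by squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: dist(features, prototypes[label]))

# Illustrative prototypes: a writing ("tripod") grip with few barrel
# contacts and low tilt, vs. a full-hand "tucked" grip with many
# contacts and high tilt.
prototypes = {
    'tripod': grip_features([[1, 1, 0], [0, 1, 0]], 0.3),
    'tucked': grip_features([[1, 1, 1], [1, 1, 1]], 1.2),
}

observation = grip_features([[1, 1, 0], [0, 1, 1]], 0.4)
print(classify_grip(observation, prototypes))  # → tripod
```

The inertial feature helps disambiguate grips whose contact patterns alone look similar, which is the benefit the grip-plus-inertial systems above are described as providing.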
Further, various conventional systems combine pen tilt with direct-touch input. One such system uses a stylus that senses which corners, edges, or sides of the stylus come into contact with a tabletop display. Thus, by tilting or rolling the stylus while it remains in contact with the display, the user can fluidly switch between a number of tools, modes, and other input controls. This system also combines direct multi-touch input with stylus orientation, allowing users to tap a finger on a control while holding or “tucking” the stylus in the palm. However, this system requires contact with the display in order to sense tilt or other motions. Related techniques combine both touch and motion for mobile devices by using direct touch to cue the system to recognize shaking and other motions of pen type input devices.