Conventional input/output (I/O) devices such as keyboards and mice do not adapt to an end user's needs or working habits, in the sense that they typically cannot adjust their physical shape in response to the user's interactive context. For example, while the functions associated with particular keys on a conventional computer keyboard can be reassigned by software, the keyboard remains a keyboard: it is not designed or enabled to change shape dynamically and transform (e.g., into a joystick) in response to the current usage context.
Moreover, conventional I/O devices tend to occupy a significant amount of the user's available working space; a keyboard, for example, may compete with a display for limited surface area. This space conflict is especially problematic in portable computing devices (e.g., laptop computers, personal digital assistants, and the like). Furthermore, while the various regions of a touch-enabled display screen can be dynamically reassigned to different functions, the physical shape of the display screen is conventionally fixed and remains a substantially flat surface. Among other limitations, this provides little or no meaningful tactile feedback to the user and is less than optimal for many interactive applications.
Existing or proposed displays that can change shape out of plane (e.g., Braille displays) generally rely on a dedicated actuator to control the out-of-plane position of each individual display element. This approach requires a large number of actuators, has performance limitations, and can be complex, unreliable, and costly.