A user interface (UI) of an application program often has hundreds or thousands of elements that, when rendered, may be combined in complex ways to provide a (hopefully) visually appealing user experience that is also straightforward and consistent to use. These elements are typically arranged as various menus of items, from which an item may be selected for navigation (e.g., to another menu) or for taking an action with respect thereto (e.g., playing a selected movie, or entering text), and so forth.
In general, a user interacts with a UI element having focus, such as a focused item within a menu, e.g., to select that item or to input text. A user also may interact at a higher level, e.g., at the menu level to change focus to another item or to scroll items into and out of the menu, or at an even higher level to change focus to another menu's item, and so on.
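The focus levels described above can be sketched in a minimal model: focus on an item within a menu, focus movement within the menu, and focus movement across menus. This sketch is illustrative only; the class and method names (`Menu`, `Screen`, `move_focus`, `next_menu`) are hypothetical and not taken from the source.

```python
class Menu:
    """A menu of items, one of which has focus at any time."""

    def __init__(self, name, items):
        self.name = name
        self.items = items
        self.focus_index = 0  # which item currently has focus

    @property
    def focused_item(self):
        return self.items[self.focus_index]

    def move_focus(self, delta):
        # Menu-level interaction: change focus to another item,
        # clamping at the ends of the item list.
        self.focus_index = max(
            0, min(len(self.items) - 1, self.focus_index + delta)
        )


class Screen:
    """A collection of menus, one of which holds the focused item."""

    def __init__(self, menus):
        self.menus = menus
        self.menu_index = 0  # which menu currently has focus

    @property
    def focused_menu(self):
        return self.menus[self.menu_index]

    def next_menu(self):
        # Higher-level interaction: move focus to another menu,
        # whose own focused item then receives input.
        self.menu_index = (self.menu_index + 1) % len(self.menus)
```

For example, with two menus, moving focus within the first menu changes its focused item, while advancing to the next menu shifts focus to that menu's focused item instead.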
In contemporary application programs, there are often several ways for a user to interact with the UI, including gamepad input (e.g., pressing the ‘A’ button on a game controller), keyboard (e.g., QWERTY) input, and media remote input (e.g., pressing a semantic button on a media remote control device, like the “Fast Forward” button). Other typical ways to interact include touch input (e.g., pressing an on-screen button with a finger, or indirectly with a gesture-detecting/tracking device such as Kinect®) and mouse or other pointer input (e.g., clicking on a UI element with a mouse). Another interactive technique is voice/speech input, such as saying the word represented by text on a button while in a voice input mode (e.g., “Play”).
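One common way to cope with the varied input types listed above is to translate each device-specific event into a single semantic action, so that a focused UI element handles, say, one “select” action regardless of the device that produced it. The following is a hypothetical sketch of such a mapping; the device names, event names, and action names are all illustrative assumptions, not details from the source.

```python
# Hypothetical table mapping (device, raw event) pairs to semantic
# UI actions. Real systems would be driven by actual device APIs.
SEMANTIC_MAP = {
    ("gamepad", "button_a"): "select",
    ("keyboard", "enter"): "select",
    ("remote", "ok"): "select",
    ("mouse", "left_click"): "select",
    ("voice", "play"): "select",
    ("gamepad", "dpad_down"): "focus_next",
    ("keyboard", "arrow_down"): "focus_next",
    ("remote", "down"): "focus_next",
}


def normalize(device, raw_event):
    """Translate a device-specific event into a semantic UI action,
    returning "unhandled" for events the table does not cover."""
    return SEMANTIC_MAP.get((device, raw_event), "unhandled")
```

With such a table, the UI element with focus only needs to respond to a small set of semantic actions, rather than to every device-specific event individually.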
The many varied types of input, when considered in combination with the many types of UI elements, can be confusing and time-consuming for a UI designer to handle correctly.