The ways in which users may interact with a computing device continue to expand. For example, users initially interacted with computing devices using a keyboard. Cursor control devices (e.g., a mouse) were then introduced to support interaction with a graphical user interface.
A recent example of this expansion involves gestures. Gestures may be input in a variety of ways, such as through detection of motion made by one or more fingers of one or more hands of a user via touchscreen or other functionality. However, gestures may suffer from a problem in that a user may not be made aware of which gestures are supported by the device. In other words, a user may be forced to “guess what to do” in order to engage in such interaction, which may be frustrating and may limit the amount of functionality available to the user of the device.