Many of the tasks that a user must perform when using a computer relate to various spatial relationships between objects. For example, such tasks may involve moving, adding, deleting, instantiating, or minimizing an object, among other operations. Typical user interfaces require a user to perform the same kind of standardized input operations for all such tasks, such as clicking a button, selecting an item from a menu, etc. While this framework may be useful for making a variety of tasks accessible by way of a few familiar input operations, it also forces the user to abstract each task into a general input operation.
For example, suppose a user must click a button to move an application from a main workspace (e.g., a desktop) to the bottom of an associated graphical user interface (e.g., a taskbar) where all tasks can be quickly tracked. The user is forced to abstract such a task into something triggered by a button that is, at best, counterintuitive or unrelated. Thus, if the user's initial concept of the foregoing task equates to pushing the application downwards, the user must spend time and mental energy setting aside that notion and thinking instead about pushing a button. This type of abstraction can adversely affect the user's experience, as well as their productivity and/or efficiency.
There is thus a need for overcoming these and/or other problems associated with the prior art.