1. Field of the Invention
The present invention relates generally to user interfaces for managing on-screen objects, and more particularly to a user interface that provides a consistent set of object management zones surrounding an on-screen object.
2. Description of the Background Art
Existing user interfaces provide many different techniques for moving, altering, controlling, and otherwise manipulating on-screen objects such as windows, images, text blocks, video, and the like. For example, the Windows XP operating system, available from Microsoft Corporation of Redmond, Wash., provides user interface mechanisms for manipulating various types of on-screen objects. Examples of such user interface mechanisms include:
- application menus (e.g., click on an object to select it, and select an operation from an application menu);
- on-screen buttons (e.g., click on an object to select it, and click a button to perform an operation on the object);
- context-sensitive menus (e.g., right-click on an object and select an operation from a pop-up menu);
- resize borders or handles (e.g., click-drag a window edge or object handle to resize the window or object); and
- keyboard commands (e.g., click on an object to select it (or use a keyboard to navigate to an on-screen object), and hit a keystroke to perform an operation on the object).
One problem with most existing techniques is that there is no consistent user interface paradigm for manipulating objects of different types. For example, the user interface for controlling text objects is significantly different from the user interface for controlling graphic objects. A user who wishes to resize a text object by increasing the text font performs an entirely different action than he or she would perform to resize a graphic object. Accordingly, users must learn a variety of different manipulation methods, and know when to apply which method to which type of object. Often users become disoriented and confused when attempting to control certain types of objects, particularly when the user interface elements for the object being controlled differ from those to which the user has become accustomed.
Furthermore, existing techniques for activating certain object manipulation operations can be cumbersome, difficult to learn, or counterintuitive. For example, specialized on-screen objects, such as objects for representing time periods, often employ different user interface paradigms that may be unfamiliar to users.
In addition, many such techniques do not translate well from one input mechanism to another (such as pen-based, mouse, voice, and keyboard input mechanisms). Users switching from one input mechanism to another must often learn a new object control paradigm in order to use the software effectively.
What is needed, therefore, is a consistent, unified user interface paradigm for providing controls for manipulating on-screen objects, which addresses the limitations of conventional schemes. What is further needed is a user interface paradigm that is extensible and that facilitates ease of use and ease of learning, even when the user is attempting to manipulate different types of objects. What is further needed is a user interface paradigm that is usable with different types of input mechanisms, and that facilitates transitions from one input mechanism to another with minimal disruption, confusion, and re-learning.