A picture is worth a thousand words. The sentiment echoes throughout many aspects of our lives, and the world of computing is no exception. Since their inception, graphical user interfaces (GUIs) have become the standard, and preferred, way for millions of users to interact with their computers. Accordingly, more and more importance is being placed on the way in which computers visually represent the user's actions and the environment in which the user may take those actions.
The typical GUI uses a number of onscreen graphical objects to visually represent functions, applications, files, menus, and a host of other features of the computing environment. To select a particular graphical object for action, the user typically uses a mouse input device to move an onscreen pointer over the object.
This GUI has proven effective, but a new step in the evolution of computing has revealed several drawbacks to existing GUIs. Specifically, the introduction of pen-based computing devices, such as the hand-held Personal Digital Assistant (PDA) or the Tablet PC being introduced by Microsoft Corp., has changed the way we view the GUI and the manner in which users can interact with their computers. This new approach to user interfaces has revealed problems and deficiencies in the traditional GUI described above, examples of which are discussed below.
One common use of computers and GUIs is to generate and edit electronic documents. These electronic documents can contain text (e.g., in electronic word processing documents) and/or images (e.g., pictures), which are displayed on the user's screen for editing. To interact with these onscreen objects, the user typically uses a mouse input device to move an onscreen pointer to the desired object, and presses a button on the mouse to select the object.
The selection of the particular object may be reflected in a change in its appearance. For example, electronic word processing programs, such as the MICROSOFT WORD (R) program, may display text as shown in FIG. 16. In FIG. 16, the text “Sample text” appears in black on a white background. The text is arranged automatically in uniform rows of text across the user's screen, where the rows of text are assigned a predefined height based on user-defined settings (e.g., the use of 10 pt. font, the line spacing, etc.). Upon selecting these words using, for example, a click-and-drag motion of the mouse pointer over the words, their appearance may change to that shown in FIG. 17. In this figure, the actual selected text is now given a white color, and the rectangular area inhabited by the text in the row is given a black background that serves as a blocked selection highlight, identifying the selected text. The black blocked selection highlight occupies the entire row height, and serves to differentiate the selected text from the non-selected text.
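The blocked-highlight behavior described above can be sketched in code. This is a hypothetical illustration of the general technique, not code from any actual word processing program; the `Run` and `render_row` names are invented for this example.

```python
# Hypothetical sketch of the "blocked" selection highlight described
# above: the selected run of text is drawn in white on a black
# rectangle spanning the full height of the text row, while
# non-selected text keeps the normal black-on-white appearance.
from dataclasses import dataclass

@dataclass
class Run:
    text: str
    fg: str   # text (font) color
    bg: str   # background color behind the text

def render_row(text: str, sel_start: int, sel_end: int) -> list[Run]:
    """Split one row of text into runs, inverting colors over the selection."""
    runs = []
    if sel_start > 0:
        runs.append(Run(text[:sel_start], fg="black", bg="white"))
    if sel_end > sel_start:
        # Blocked highlight: predetermined colors replace whatever font
        # color the text originally had, which is the drawback noted
        # in the discussion of FIG. 17.
        runs.append(Run(text[sel_start:sel_end], fg="white", bg="black"))
    if sel_end < len(text):
        runs.append(Run(text[sel_end:], fg="black", bg="white"))
    return runs

rows = render_row("Sample text", 0, 6)  # select the word "Sample"
```

Note that the selected run's original font color is discarded entirely, which is why any meaning carried by that color is lost while the selection is active.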
Although this prior art approach to highlighting text works well in the uniform, line-by-line environment of traditional word processors, it is less desirable in environments that allow a greater degree of freedom, such as pen-based computing devices. For example, in systems where the text is handwritten (e.g., on a personal digital assistant using a touch-sensitive screen), the user may be permitted to write text above and below any such regimented lines. FIG. 18 shows an example of handwritten text that does not conform to any such lines. The "blocked" approach discussed above would result in confusion as to what is actually selected.
A further drawback to this type of selection highlighting lies in the colors used for the selection block, and for the selected text appearing in the block. Although users of word processing programs are often provided with the ability to select font colors and/or background colors, they are not provided with the ability to select the color to be used for the selection block highlight, nor are they offered the ability to select the color of the selected text. Instead, these word processing programs automatically use predetermined colors for the selection block and selected text, which are often entirely different colors from the font color and background color. By using these new colors, these word processing programs may inadvertently obliterate any meaning that had been assigned to the original font color. For example, if a user were accustomed to using a red font color to identify text added by a certain person, and this color were changed upon selection of the text, the user might then be unable to tell whether the selected text was originally red or not.
The prior art approach to highlighting images, as opposed to the regimented text of traditional word processors, offers little improvement. FIGS. 19a and 19b show examples of how an image, such as a simple diagonal line, may appear when selected. If the line is a simple vector line created, for example, using a line drawing option available in the MICROSOFT VISIO® program, the selected line's only change in appearance is the addition of selection handles 1901 at its endpoints. If the line is an image, such as a bitmap image, its appearance upon selection changes with the addition of a selection box with handles 1901 located around the periphery of the image. Each of these approaches has drawbacks. In the former, the mere addition of handles 1901 does not clearly indicate the extent of the selected graphical object. In the latter, the selected line image may be identified by the surrounding box and handles, but the box encloses a large amount of empty white space, obscuring more visual "real estate," or displayable area of the GUI, than is necessary.
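The wasted white space can be quantified with a brief sketch. The following is a hypothetical illustration (the `line_pixels` and `bounding_box_area` helpers are invented for this example): for a diagonal line, the rectangular selection box covers the line's full bounding rectangle, while the line itself occupies only a thin diagonal band of pixels.

```python
# Hypothetical sketch: compare the pixels occupied by a diagonal line
# with the area of the rectangular selection box drawn around it.

def line_pixels(x0, y0, x1, y1):
    """Pixels on a perfectly diagonal line (assumes |dx| == |dy|)."""
    n = abs(x1 - x0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    return {(x0 + i * sx, y0 + i * sy) for i in range(n + 1)}

def bounding_box_area(pixels):
    """Area of the smallest axis-aligned rectangle enclosing the pixels."""
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    return (max(xs) - min(xs) + 1) * (max(ys) - min(ys) + 1)

line = line_pixels(0, 0, 99, 99)      # a 100-pixel diagonal line
box_area = bounding_box_area(line)    # selection box: 100 x 100 pixels
wasted = box_area - len(line)         # white space inside the box
```

Here the selection box covers 10,000 pixels while the line itself occupies only 100, so roughly 99% of the highlighted region is empty white space of the kind criticized above.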
Accordingly, there is now a need for an improved approach to identifying selected graphical objects in a GUI environment that can overcome one or more of the deficiencies identified above.