Modern data processing systems, including wireless devices (e.g., personal digital assistants (“PDAs”), cellular telephones, mobile devices, global positioning system (“GPS”) receivers, etc.), are used for numerous applications such as electronic mail, voice and data communications, word processing, mapping, navigation, computer games, etc. In general, these applications are launched by the system's operating system upon selection by a user from a menu or other graphical user interface (“GUI”). A GUI is used to convey information to and receive commands from users and generally includes a variety of GUI objects or controls, including icons, toolbars, drop-down menus, text, dialog boxes, buttons, and the like. A user typically interacts with a GUI by using a pointing device (e.g., a mouse) to position a pointer or cursor over an object and “clicking” on the object.
One problem with these data processing systems and devices is their inability to effectively display detailed information for selected graphic objects when those objects are in the context of a larger image. A user may require access to detailed information with respect to an object in order to closely examine the object, to interact with the object, or to interface with an external application or network through the object. For example, the detailed information may be a close-up view of the object or a region of a digital map image.
While an application may provide a GUI for a user to access and view detailed information for a selected object in a larger image, in doing so, the relative location of the object in the larger image may be lost to the user. Thus, while the user may have gained access to the detailed information required to interact with the object, the user may lose sight of the context within which that object is positioned in the larger image. This is especially so when the user must interact with the GUI using a computer mouse, keyboard, or keypad. The interaction may further distract the user from the context in which the detailed information is to be understood. This problem is an example of what is often referred to as the “screen real estate problem”.
The screen real estate problem generally arises whenever large amounts of information are to be displayed on a display screen of limited size. Known tools to address this problem include panning and zooming. While these tools are suitable for a large number of visual display applications, they become less effective where sections of the visual information are spatially related, such as in layered maps and three-dimensional representations, for example. In this type of information display, panning and zooming are less effective because much of the context of the panned or zoomed display may be hidden.
The screen real estate problem is most apparent in wireless devices having small display screens. In particular, wireless devices such as cellular phones, PDAs, and portable GPS navigation devices typically present usability challenges in making device functions efficiently and easily accessible, due to limited-size displays and other device limitations such as small keyboards or small active input surfaces (e.g., touchscreens) for user input. Such problems are compounded by the increasing functionality of modern wireless devices, wherein new capabilities such as cameras, music players, and video players are being incorporated into these devices, making them increasingly complex. The end result is that the user typically faces difficulties in efficiently gaining desired access to a particular device feature, or to particular content, while maintaining awareness of how to access other device capabilities or content.
Advances in detail-in-context presentation technologies (such as described in U.S. Pat. No. 7,106,349, which is incorporated herein by reference) show promise in dealing with display screen real estate challenges. Furthermore, the coupling of such technologies with animated transitions to new presentation states (such as described in U.S. patent application Ser. No. 10/989,070, which is incorporated herein by reference) and to touchscreen displays (such as described in U.S. patent application Ser. No. 11/249,493, which is incorporated herein by reference) or other means of user input may be of assistance in dealing with small device user interface problems. However, a need remains for an improved user interface for such devices.
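For context, detail-in-context presentation is often illustrated with a fisheye-style coordinate transform that magnifies a region of interest while compressing, rather than hiding, the surrounding information. The following Python sketch implements a generic Sarkar–Brown style fisheye mapping on normalized coordinates; it is offered only as a minimal illustration of the general class of techniques, under the assumption of a one-dimensional normalized display axis, and is not the specific method of the patents and applications cited above.

```python
def fisheye(x: float, focus: float, d: float = 3.0) -> float:
    """Map a normalized coordinate x in [0, 1] so that the region near
    `focus` is magnified while the remainder is smoothly compressed.

    `d` is the distortion factor: the magnification at the focus point
    is approximately (d + 1). This is a generic illustrative transform,
    not the patented detail-in-context method.
    """
    if x >= focus:
        # Distance from the focus toward the right edge, rescaled to [0, 1].
        t = (x - focus) / (1.0 - focus) if focus < 1.0 else 0.0
        return focus + (1.0 - focus) * ((d + 1.0) * t) / (d * t + 1.0)
    else:
        # Distance from the focus toward the left edge, rescaled to [0, 1].
        t = (focus - x) / focus if focus > 0.0 else 0.0
        return focus - focus * ((d + 1.0) * t) / (d * t + 1.0)
```

Because the transform fixes the endpoints and the focus while expanding distances near the focus, a point close to the region of interest is rendered farther from it on screen (magnified), yet every surrounding point still remains visible on the display, preserving context.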
A need therefore exists for an improved method and system for generating and adjusting detailed views of selected information within the context of surrounding information presented on the display of a data processing system. Accordingly, a solution that addresses, at least in part, the above and other shortcomings is desired.