The growth in popularity of digital media has brought increased benefits and accompanying new challenges. While developments have continued to improve the digitization, compression, and distribution of digital media, it has become difficult to easily and effectively navigate through such media. Accessing media from a variety of sources of potentially different types increases the difficulty of navigation, as each source may include a large number of media items, including songs, videos, pictures, and/or text, in a variety of formats and with varying attributes.
The sheer volume of digital media available, along with significant diversity in metadata systems, navigational structures, and localized approaches, makes it difficult for any single device or application user interface to meet the needs of all users for accessing the digital media.
In general, descriptive metadata ordinarily associated with digital media may be unavailable, inaccurate, incomplete, and/or internally inconsistent. When metadata is present, it often consists only of textual tags that indicate a single level of description (e.g., rock genre), and the level of granularity used in that single level may vary greatly even within a single defined vocabulary. Often the granularity available is either too detailed or too coarse to meet a given user interface requirement. Additionally, for a given media object there may be multiple values available for a given descriptor type, and/or multiple values of the same type from multiple sources, which can cause additional problems when attempting to navigate the digital media.
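The multi-value, mixed-granularity problem described above can be made concrete with a minimal sketch. All names here are hypothetical: a track carries flat genre tags from several sources at different levels of detail, and a simple mapping collapses them to one broad level as a navigation UI might require.

```python
# Hypothetical example: one track's genre tags, as delivered by three
# different sources at inconsistent granularity.
track_tags = {
    "source_a": ["Rock"],                 # very coarse
    "source_b": ["Indie Rock", "Lo-Fi"],  # finer, multi-valued
    "source_c": ["rock"],                 # duplicate, different casing
}

# Illustrative mapping from fine-grained tags to a single broad level.
BROAD_GENRE = {
    "rock": "Rock",
    "indie rock": "Rock",
    "lo-fi": "Rock",
}

def normalize(tags_by_source):
    """Collapse mixed-granularity, multi-source tags to one broad set."""
    broad = set()
    for tags in tags_by_source.values():
        for tag in tags:
            broad.add(BROAD_GENRE.get(tag.lower(), tag))
    return sorted(broad)

print(normalize(track_tags))  # -> ['Rock']
```

The sketch illustrates the cost of flat tags: resolving conflicts requires an external mapping that a single-level vocabulary does not itself provide.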
While the scope and diversity of digital media available to users have increased, the user interface constraints of limited screen size and resolution in some devices have remained effectively fixed. Although resolution has increased, the screen size used to access some digital media has decreased as devices have become more portable. Portable devices may therefore require that lists, and the terms used in such lists, remain short and simple.
At the same time, the application logic driving media applications, such as automatic playlist engines, recommendation engines, user profiling and personalization functions, and community services, benefits from increasingly detailed and granular descriptive data. Systems that attempt to utilize the same set of descriptors for both user interface display and application logic may be unable to meet both needs effectively at once, while using two completely separate systems risks discontinuity and user confusion.
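One way to reconcile the two needs, sketched here under assumed names and an assumed genre hierarchy, is a single shared taxonomy: application logic keeps the fine-grained leaf label, while the UI displays an ancestor at a shallow depth.

```python
# Illustrative child -> parent hierarchy (genre names are assumptions,
# not a defined vocabulary).
PARENT = {
    "Bebop": "Jazz",
    "Cool Jazz": "Jazz",
    "Jazz": "Music",
    "Indie Rock": "Rock",
    "Rock": "Music",
}

def ancestor_at_depth(genre, depth):
    """Walk up the hierarchy; return the ancestor `depth` levels below the root."""
    chain = [genre]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    chain.reverse()  # root first
    return chain[min(depth, len(chain) - 1)]

# A recommendation engine can keep the fine label...
fine = "Bebop"
# ...while a small-screen UI shows its coarse ancestor.
print(ancestor_at_depth(fine, 1))  # -> 'Jazz'
```

Because both layers consult one hierarchy, the coarse display term and the granular logic term stay consistent, avoiding the discontinuity of two separate descriptor systems.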
In addition, different users may use different media navigation structures, labeling, and related content when accessing content from different geographic regions, in different languages, from various types of devices and applications, as different user types, and/or according to personal media preferences. Existing navigation structures may not enable developers to easily select, configure, and deliver appropriate navigational elements to each user group or individual user, especially in the case of an embedded device.
Traditionally, access to media available from such varied sources has been organized in a source or service paradigm. A user who desired to hear jazz music would either pre-select a single device or source to browse, such as a local HDD, or drill in and out of the navigation UI for each device or source separately to view its available jazz content.
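The alternative to the source-by-source drill-down above can be sketched as a single genre-keyed view aggregated across sources. The source names, titles, and genres below are illustrative placeholders, not part of the original description.

```python
# Hypothetical catalogs from three sources, each item a (title, genre) pair.
sources = {
    "local_hdd": [("So What", "Jazz"), ("Paranoid", "Rock")],
    "streaming": [("Take Five", "Jazz")],
    "home_server": [("Blue in Green", "Jazz")],
}

def browse_by_genre(sources, genre):
    """Return (source, title) pairs for one genre across all sources."""
    hits = []
    for name, items in sources.items():
        for title, g in items:
            if g == genre:
                hits.append((name, title))
    return sorted(hits)

# All jazz content is visible in one list, regardless of where it resides.
print(browse_by_genre(sources, "Jazz"))
```

In the source paradigm the user would have issued this query three times, once per device; the aggregated view answers it once.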
Existing media navigation solutions have been largely static, unable to respond dynamically to changes in a user's media collection or to other explicit or implicit, temporary or long-term personal preferences. Likewise, there has been only limited capability for media navigation options to respond dynamically to real-time or periodic changes in other global and personal contextual data sources, including those related to time, location, motion, orientation, personal presence, and object presence.