The present invention relates generally to solutions for accomplishing user-friendly access to media files. More particularly, the invention relates to a system for accessing digital media files representing audio and/or visual information, and a corresponding method. The invention also relates to a computer program product configured to implement the proposed method, and to a computer readable medium having a program recorded thereon, the program being adapted to make a data-processing apparatus carry out the proposed method.
Today, media data, e.g. files representing music, video and/or still images, are gaining increasing importance. Inter alia, this is an effect of modern portable media players having become relatively capable (i.e. being equipped with powerful processors, high-resolution screens and resourceful storage devices). However, as a result, the media player user is confronted with large amounts of complex information, in which it may be intricate to navigate and difficult to find specific files, or groups thereof.
The published US patent application 2004/0175098 describes a personal media player capable of reproducing data in the form of video, audio and still images ported from sources outside of the player itself. A user interface here includes a display unit, and the user interface is functionally coupled to a digital media processing system. Thereby, a user may control the operation of the device. The display unit is also functionally coupled to the digital media processing system. Thus, the display unit may also functionally support and complement the operation of the user interface by providing visual and audio output to the user during operation.
U.S. Pat. No. 6,877,134 discloses a solution for a digital capture system, such as a digital encoder, having an embedded real-time content-based analysis function to extract metadata from the digital signals. Hence, metadata (descriptive information about the digital content) may be produced in real-time during the encoding process. The metadata is either stored separately from the content, or is formatted and combined with the digital content in a container format, such as MPEG-7, QuickTime, or FlashPix. The metadata, in turn, generally falls into two broad categories, referred to as collateral metadata and content-based metadata respectively. The collateral metadata may include date, time, camera properties, user labels or annotations etc., while the content-based metadata may include information extracted automatically by analyzing the audiovisual signal and deriving properties from it (e.g. key frames, speech-to-text, speaker ID, visual properties, face identification/recognition, optical character recognition (OCR) etc.).
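The distinction between the two metadata categories described above can be sketched as a simple data structure. The following is a minimal illustration only; the field names and values are hypothetical and are not prescribed by U.S. Pat. No. 6,877,134:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class MediaMetadata:
    """Illustrative container separating the two broad metadata
    categories. All field names here are hypothetical examples."""

    # Collateral metadata: recorded alongside capture, not derived
    # from the signal itself (date, time, camera properties, labels).
    collateral: Dict[str, str] = field(default_factory=dict)

    # Content-based metadata: extracted automatically by analyzing
    # the audiovisual signal (key frames, speech-to-text, face IDs).
    content_based: Dict[str, List[str]] = field(default_factory=dict)


meta = MediaMetadata()
meta.collateral["capture_date"] = "2006-05-01"
meta.collateral["camera_model"] = "ExampleCam X1"
meta.content_based["speech_to_text"] = ["hello", "world"]
meta.content_based["key_frames"] = ["frame_0012", "frame_0348"]
```

Such a separation lets collateral fields be stored as plain key/value pairs, while each content-based field may hold a list of automatically extracted items.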
Although the above-mentioned solutions represent efficient strategies for generating and reproducing various forms of digital media files, there is as yet no satisfactory technical solution for structuring large amounts of media information and presenting this information to a user in a straightforward and highly intuitive manner.