Since a mobile (embedded) environment has limited resources, lightweight technology is in demand. In particular, when the mobile environment handles multimedia (hereinafter simply referred to as "media"), including a single audio or video (image) stream or a combination thereof, a large amount of resources, including power, is consumed owing to the characteristics of the mobile environment. Accordingly, a media framework, which is a fundamental environment supporting the playback, storage, or transmission of a media file, the reception of real-time Digital Multimedia Broadcasting (DMB), or the support of a video conference (hereinafter, such media processing work is collectively referred to as "rendering"), plays an important role in the mobile (embedded) environment. Here, the main consumers of the media framework are Operating System (OS) platform builders and media application developers, rather than general end users; the platform builder or application developer develops media applications or selects products based on the media framework.
Meanwhile, the conventional media framework was not designed for a mobile (embedded) environment, but was developed for environments, such as personal computers, having sufficient resources. Accordingly, generality and flexibility were the important factors, enabling the media framework to accommodate a large number of components, and it is therefore difficult to use the conventional media framework in the mobile (embedded) environment because of its significant resource consumption. In particular, the conventional media framework supports other operations, such as rendering, only after "media graph construction", that is, the connection of the components necessary for rendering specific media, has been completed. Here, a "component" refers to an independently functioning computer program for processing a media data stream, and each component may optionally have one or more input ports and output ports. A "media graph" refers to a set of one or more components added or connected for processing a media data stream.
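The "component" and "media graph" notions above can be modeled as a minimal Python sketch. All names here (`Port`, `Component`, `MediaGraph`) are hypothetical illustrations, not part of any actual framework; the key point is that rendering is allowed only once every port in the graph is connected.

```python
from dataclasses import dataclass, field

@dataclass
class Port:
    name: str
    media_types: list          # data types this port can produce or accept
    peer: "Port | None" = None # the port it is connected to, if any

@dataclass
class Component:
    """An independently functioning processing unit for a media data stream."""
    name: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

@dataclass
class MediaGraph:
    """A set of components added or connected for processing a media stream."""
    components: list = field(default_factory=list)

    def add(self, comp):
        self.components.append(comp)

    def is_complete(self):
        # The conventional framework permits rendering only after every
        # port of every component has been connected to a peer.
        return all(p.peer is not None
                   for c in self.components
                   for p in c.inputs + c.outputs)
```

For instance, a two-component graph is incomplete until its ports are linked, which is exactly the "connection before rendering" constraint the text describes.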
FIG. 1 is a media graph illustrating, by way of example, a disadvantage of the conventional media framework.
As illustrated in FIG. 1, when the media graph includes six components A, B, C, D, E, and F, the conventional media framework can execute rendering only after the connections between the components have been completely made. When the conventional media framework connects the components, it performs a negotiation process in order to agree on the data type and the data buffers of the media. For example, if component A supports n data types and component B supports m data types, up to n×m data-type negotiations may occur. In the case of video, an additional negotiation is carried out in order to agree on the size (width and height), the stride, and so on, in addition to the data type. Further, even after the data types have been matched, the connection is completed only after a further negotiation determines the data buffers. Such a process is required for every connection, that is, for each of the five connections between the six components in the present example. Moreover, depending on the case, the connections between components on the upstream (left) side may have to be made again according to the characteristics of the connection between components on the downstream (right) side.
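The n×m cost of the data-type negotiation can be sketched as follows. This is a hypothetical simplification (real frameworks negotiate richer capability structures, not plain strings), but it shows why the number of attempts grows with the product of the two type lists, with the worst case being no common type at all.

```python
from itertools import product

def negotiate_type(out_types, in_types):
    """Try every (output, input) type pairing until one matches.

    With n output types and m input types, up to n*m checks are made;
    returns the agreed type (or None) and the number of attempts.
    """
    attempts = 0
    for out_t, in_t in product(out_types, in_types):
        attempts += 1
        if out_t == in_t:
            return out_t, attempts
    return None, attempts

# A match found only on the last of the n*m = 2*3 combinations:
agreed, tried = negotiate_type(["yuv420", "rgb565"],
                               ["nv12", "rgb888", "rgb565"])
```

And this covers only the data type; as the text notes, size/stride and buffer negotiations follow before a single connection is complete.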
Such a complex connection process entails processing time and the consumption of resources, such as power, in the mobile (embedded) environment. Representative products based on the conventional media framework technology include DirectShow of Microsoft Corporation and the open-source GStreamer.
As described above, when the conventional media framework technology is applied to the mobile (embedded) environment, a plurality of negotiation processes must be repeated during the connection process for the media graph construction, so that the media graph construction becomes slow and resources, including power, are consumed.
In addition, since the media application (developer) is responsible for the media graph construction in the conventional media framework, the application must check whether the media graph is properly constructed and whether an error occurs during the construction. Accordingly, the media application bears the burden of continuously monitoring the media graph construction. As a result, the connection process unnecessarily consumes resources, owing to unnecessary processing changes and the like, in the mobile (embedded) environment, in addition to burdening the application developer.
Since the relevant hardware, such as a video decoder chip or an audio decoder chip, is already determined in the mobile (embedded) environment, an OS platform builder could provide a preset such that the media graph may be constructed in accordance with the predetermined hardware, and the media application could simply select the preset and request the media graph construction. However, the conventional media framework technology cannot support such a prompt connection.
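The preset idea can be sketched in a few lines. The component names below are hypothetical placeholders; the point is that, because the decoder hardware is fixed at platform-build time, the platform builder could ship the finished connection list and the application would only select it, skipping per-connection negotiation.

```python
# Hypothetical preset: the platform builder fixes the full connection list
# for the known T-DMB hardware at build time.
TDMB_PRESET = [
    ("tdmb_receiver",      "tdmb_demux"),
    ("tdmb_demux",         "avc_video_decoder"),
    ("avc_video_decoder",  "video_output"),
    ("tdmb_demux",         "bsac_audio_decoder"),
    ("bsac_audio_decoder", "audio_output"),
]

def build_from_preset(preset):
    """Connect components directly from the preset, with no type or buffer
    negotiation per connection (the hardware capabilities are known)."""
    graph = {}
    for src, dst in preset:
        graph.setdefault(src, []).append(dst)
    return graph
```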
Hereinafter, the problem of the conventional media framework technology will be described in detail through two specific examples: reception of Terrestrial Digital Multimedia Broadcasting (T-DMB) and play of an MPG file.
FIG. 2 is a diagram illustrating the process of the media graph construction for reception of T-DMB in the conventional media framework. As illustrated in FIG. 2, in order for the conventional framework to receive T-DMB, a T-DMB receiver component 1 and a T-DMB demultiplexer component 2 are added and connected to each other in the first procedure P1, and a media type negotiation process and a buffer negotiation process are performed repeatedly in every connection process. Since no broadcast has been received from the T-DMB receiver component 1 before the play of the second procedure P2, the T-DMB demultiplexer component 2 cannot determine its output ports (even if the output ports have been configured, they are in a connection-disabled state). Accordingly, through the performance of the play of the second procedure P2, the T-DMB receiver component 1 receives T-DMB broadcast data and transfers it to the T-DMB demultiplexer component 2.
Then, the media application must repeatedly monitor whether the T-DMB demultiplexer component 2 succeeds in analyzing the T-DMB broadcast and generates the output ports. Such repetitive monitoring is a burden on the media application. Further, in this process, if no output port has been configured after a predetermined time, the media application must perform error processing. In this case, the media application must decide the duration of the monitoring and the number of generated output ports required for the media graph construction to be deemed a success (for example, a case where only the video output port is generated but the audio output port is not, or vice versa; since data reception over wireless broadcasting is unstable, the media application operates properly when the radio signal is strong but not when the signal is unstable). Accordingly, the existing media framework has the problem of "shifting the responsibility for the media graph construction onto the media application".
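The monitoring burden described above can be sketched as a polling loop that the application itself must write. Everything here (function names, the timeout and poll-interval values) is a hypothetical illustration; the point is that choosing the timeout, the polling interval, and the error handling is all left to the application.

```python
import time

def wait_for_output_ports(get_ports, timeout_s=5.0, poll_s=0.1):
    """Poll the demultiplexer until it exposes output ports or time runs out.

    The application must pick timeout_s and poll_s itself, and must also
    decide how many ports count as success -- the framework does not help.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        ports = get_ports()   # e.g., query the demultiplexer component
        if ports:
            return ports
        time.sleep(poll_s)
    # Error processing is also the application's responsibility.
    raise TimeoutError("demultiplexer produced no output ports in time")
```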
Referring to FIG. 2 again, when the output ports of the T-DMB demultiplexer component 2 are normally generated, the media application performs a stop operation in the fourth procedure P4 in order to complete the media graph for the corresponding output ports, because "manipulation of the media graph is generally restricted during play in the conventional framework technology". Next, after the stop operation, the media graph for the video output is completed by sequentially connecting an AVC (Advanced Video Coding; the MPEG-4 AVC/H.264 video codec standard) video decoder component 3 and a video output unit component 4 to the video output port of the T-DMB demultiplexer component 2 in the fifth procedure P5. Then, the media graph for the audio output is completed by sequentially connecting a BSAC (Bit-Sliced Arithmetic Coding; the audio codec standard used for T-DMB) audio decoder component 5 and an audio output unit component 6 to the audio output port of the T-DMB demultiplexer component 2 in the sixth procedure P6.
When the desired media graph is completed through the above procedures, the media application performs a play in the seventh procedure P7 so that the user may watch the video and audio of the T-DMB broadcast. Thus, the media application must check every procedure in detail, and whether an error occurs, up to the completion of the media graph and the performance of the play operation. In the present example, a connection failure may occur in any of the five connections between the respective components, and the case requiring continuous monitoring for the generation of the output ports in the third procedure P3 must also be considered in a situation, such as broadcasting, where the reception strength varies. Accordingly, the media application bears a large burden in the conventional media framework, and this burden further increases in the mobile (embedded) environment.
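Putting procedures P1 through P7 together, the application-side flow can be sketched against a stub framework. The `StubGraph` class and all component names are hypothetical stand-ins used only to make the ordering of operations (play, monitor, stop, connect, play again) explicit; the stub assumes both ports appear in P3.

```python
class StubGraph:
    """Hypothetical stand-in for a framework graph; records every call."""
    def __init__(self):
        self.log = []
    def add(self, name):
        self.log.append(("add", name)); return name
    def connect(self, *names):
        self.log.append(("connect",) + names)
    def play(self):
        self.log.append(("play",))
    def stop(self):
        self.log.append(("stop",))

def wait_for_ports(demux):
    return ["video", "audio"]   # assume both ports appear (procedure P3)

def build_tdmb_graph(g):
    recv = g.add("tdmb_receiver")            # P1: add receiver and demux
    demux = g.add("tdmb_demux")
    g.connect(recv, demux)                   # P1: connect them
    g.play()                                 # P2: play so data reaches demux
    ports = wait_for_ports(demux)            # P3: monitor for output ports
    g.stop()                                 # P4: graph edits barred during play
    if "video" in ports:                     # P5: complete the video branch
        g.connect(demux, g.add("avc_decoder"), g.add("video_out"))
    if "audio" in ports:                     # P6: complete the audio branch
        g.connect(demux, g.add("bsac_decoder"), g.add("audio_out"))
    g.play()                                 # P7: final play
    return g.log
```

Note that the application must drive every step itself, including the intermediate play/stop pair whose only purpose is to coax the demultiplexer into exposing its output ports.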
FIG. 3 is a diagram illustrating the process of the media graph construction for play of an MPG (MPEG: Moving Picture Experts Group) file in the conventional media framework. As illustrated in FIG. 3, for the play of an MPG file in the conventional media framework, an MPG file reader component 11 for reading data from the MPG file and an MPG file parser component 12 for analyzing the MPG file are added in the first procedure P11. At this point, the MPG file parser component 12 cannot yet determine its output ports because it has not yet received any data.
Next, when the MPG file reader component 11 and the MPG file parser component 12 are connected in the second procedure P12, the MPG file reader component 11 reads partial data of the file and transfers it to the MPG file parser component 12 during the connection process, and the MPG file parser component 12 generates output ports according to the contents of the analyzed data. Since the MPG file is a local file, the MPG file reader component 11 can read the data directly and provide it to the MPG file parser component 12 during the connection process, unlike in the broadcasting case. However, when the data is not provided, the media application must perform the "play" operation, stop the play after a short time, and then construct the media graph for the output ports, as in the T-DMB broadcasting example. The present example assumes a situation where only the video output port is generated, because the MPG file has no audio data or contains an error. In this case, the media graph for the video output is completed by sequentially connecting an MPG video decoder component 13 and a video output unit component 14 in the third procedure P13.
Here, the media application must decide whether to treat the failure to construct the media graph for the audio as an error or as a normal situation. When the failure is treated as a success (normal) situation, the media application performs the play operation and outputs the video on the screen.
In this process, since the media graph for the audio is not completed, the media application must recognize this and perform processing such that no audio-related configuration is made. In this case, the processing of the audio-related user interface, as well as the configuration of the audio-related hardware, must additionally be considered. As described above, in the conventional media framework, the media application bears the burden of checking the result of the media graph construction and performing the relevant processing. Conversely, when only the media graph for the audio is constructed and the media graph for the video is not, the relevant processing becomes even more complex; that is, since there is no video, the media application must fill the corresponding area of the screen, adjust its size, or the like, further increasing its burden.
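The application-side decisions for a partially constructed graph can be sketched as follows. The function and action names are hypothetical; the sketch only illustrates that every branch (video-only, audio-only, nothing) must be handled by the application, not the framework.

```python
def handle_graph_result(ports):
    """Decide, in the application, how a partially built media graph is
    treated: the framework reports which output ports exist, and all
    UI/hardware consequences are the application's responsibility."""
    if not ports:
        raise RuntimeError("media graph construction failed")
    actions = []
    if "video" in ports:
        actions.append("render video")
    else:
        # No video: the application must fill and size the video area itself.
        actions.append("fill video area with placeholder")
    if "audio" in ports:
        actions.append("render audio")
    else:
        # No audio: suppress audio UI and skip audio hardware configuration.
        actions.append("disable audio UI and audio hardware")
    return actions
```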