In conventional multimedia, the final displayed image output may be composed of disparate media sources that are composited together to create the desired image output. In some cases the individual media source images are each created through processes which have distinct performance, power, and interactivity costs. For example, a media source can describe a final image through the computation of instructions that describe curves, paths and paint brushes. The latency of the composition process is typically related to the sum of the total latencies for processing each media source included in the final image. This latency is a key factor in determining the quality and responsiveness of the overall device experience. The lower the overall composition latency, the more responsive the device will feel; the higher the latency, the more sluggish.
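The additive latency model described above can be sketched as follows. This is a minimal illustration only: the source names and latency figures are hypothetical assumptions, not measurements from any actual composition pipeline.

```python
# Hypothetical sketch of the composition-latency model described above:
# total latency is the sum of the per-source processing latencies.

def composition_latency_ms(source_latencies_ms):
    """Total latency of compositing one frame from multiple media sources."""
    return sum(source_latencies_ms)

# Illustrative per-source costs (assumed values): a vector-graphics layer
# evaluated from curve/path instructions, a decoded video frame, and a
# UI overlay.
latencies = {"vector_layer": 4.0, "video_frame": 2.5, "ui_overlay": 1.0}
total = composition_latency_ms(latencies.values())  # 7.5 ms in this example
```

Under this model, lowering the latency of any single media source, or skipping sources whose output has not changed, directly reduces the overall composition latency.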
In traditional hardware accelerated graphics applications, multiple resolutions of the same texture are kept on a server or on attached media. The texture is then loaded into a graphics processing unit (GPU) at runtime. The GPU selects the most appropriate texture size based on the size of the texture on the screen and various other quality metrics. In some composition scenarios, the texture is dynamic or created through other processes. For example, a texture can be a media source that describes a final image through the computation of instructions that describe curves, paths and paint brushes. These media sources can be evaluated at multiple resolutions such that, no matter what size they take on screen, they have a pixel-perfect definition. In some applications, the user has the option to zoom in on these textures, which requires recomputation of the media at an appropriate resolution for a pixel-perfect display. In such scenarios, a limiting factor is the amount of memory utilized by these external processes. It is impractical to compute and store all resolutions of each texture due to memory constraints.
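The resolution-selection behavior described above can be sketched as follows. This is an assumed simplification, not the GPU's actual selection logic: it uses a single one-dimensional size metric and a hypothetical set of precomputed resolutions.

```python
# Hypothetical sketch of resolution selection: given the texture's current
# on-screen size (here reduced to one dimension, in pixels) and a small set
# of precomputed resolutions, pick the smallest resolution that still
# covers the on-screen size, so the display remains pixel-perfect without
# storing every possible resolution.

def select_resolution(on_screen_px, available_resolutions):
    """Return the smallest stored resolution >= the on-screen size,
    falling back to the largest stored resolution when the texture is
    displayed larger than anything available (e.g. after a deep zoom,
    which would trigger recomputation at a higher resolution)."""
    candidates = [r for r in available_resolutions if r >= on_screen_px]
    return min(candidates) if candidates else max(available_resolutions)

# Illustrative use: a vector media source precomputed at a few sizes.
stored = [128, 256, 512, 1024]
chosen = select_resolution(300, stored)  # 512 covers a 300 px display
```

When the user zooms such that the on-screen size exceeds every stored resolution, the fallback case corresponds to the recomputation requirement noted above: the media source must be re-evaluated at a higher resolution to restore a pixel-perfect display.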
Accordingly, what is needed is an efficient mechanism to quickly create the final images when needed. What are also needed are systems and methods for controlling the performance of image creation in a manner that utilizes scene-specific knowledge to determine media requirements for individual media sources, and that then limits processing to particular media sources with certain media requirements.