Digital video in general, and high definition digital video in particular, is becoming widely used in a variety of display devices for educational, business, and consumer entertainment applications. The cost of digital video processing continues to fall, and as a result the benefits of digital manipulation of video signals over analog alternatives have become more compelling. Digital processing of video signals relies on a variety of techniques related to sampling and quantization, compression, encoding, modulation, error correction, post-processing, and the like, which are often used in complementary ways to achieve high-quality video transmission or storage at reduced overall bandwidth or storage capacity requirements.
As a result, digital video is now fairly ubiquitous and can be found in a variety of computing devices such as personal computer workstations and laptop computers, and even in handheld devices such as cellular telephones, personal digital assistant devices, and portable music and video players.
In most digital devices that display images, a circuit responsible for graphics processing such as a graphics processing unit (GPU), an integrated graphics processor (IGP), a digital signal processor (DSP), or even a central processing unit (CPU) is used. When available, dedicated graphics processing circuits such as GPUs are often utilized by application software to process, composite and render digital images to interconnected displays. This is typically accomplished by providing graphics data and associated instructions to a graphics processing circuit, through an application programming interface (API) defined for that purpose. The use of a graphics API enables relatively high level application programs to take advantage of processing capabilities typically available on a graphics processor.
A graphics processor manipulates received graphics data, which may be representative of a three dimensional scene, and outputs a two dimensional image for viewing, using a sequence of stages collectively called a graphics pipeline. In some devices, these stages may include an input assembler, various shaders, such as a vertex shader and a pixel shader, and an output merger. Each stage may read its input data from, and write its output data to, buffers formed inside a video memory or frame buffer memory accessible to the graphics processor.
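The flow of data through such a pipeline can be illustrated with a minimal sketch, in which each stage is modeled as a function that reads an input buffer and writes an output buffer. The stage names follow the description above; the data layout, the trivial transform, and the constant shading are purely illustrative assumptions, not a description of any particular graphics processor.

```python
# Illustrative graphics pipeline: each stage reads its input and writes
# its output into a buffer, standing in for buffers in video memory.
# All data structures and operations here are simplified placeholders.

def input_assembler(raw_vertices):
    # Group raw vertex data into primitives (here: triangles of 3 vertices).
    return [raw_vertices[i:i + 3] for i in range(0, len(raw_vertices), 3)]

def vertex_shader(primitives):
    # Transform each vertex (here: a trivial translation by +1 on x).
    return [[(x + 1, y) for (x, y) in tri] for tri in primitives]

def pixel_shader(primitives):
    # Produce a color per primitive (here: a constant grey).
    return [(128, 128, 128) for _ in primitives]

def output_merger(colors, frame_buffer):
    # Merge shaded results into the frame buffer (here: a simple append).
    frame_buffer.extend(colors)
    return frame_buffer

frame_buffer = []
verts = [(0, 0), (1, 0), (0, 1), (2, 2), (3, 2), (2, 3)]
prims = input_assembler(verts)        # 6 vertices -> 2 triangles
shaded = vertex_shader(prims)
colors = pixel_shader(shaded)
output_merger(colors, frame_buffer)   # one color per assembled triangle
```

Each intermediate result (`prims`, `shaded`, `colors`) corresponds to a buffer that a real pipeline would hold in video memory between stages, which is what makes the memory traffic discussed below relevant.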
In a typical computing device, various stages of the graphics pipeline may have an associated API that exposes capabilities of the graphics processor that are suited for the stage to application software. As a result, multiple APIs are typically used by application software to render graphics.
Modern video data sources such as high definition digital television (HDTV) video sequences stored on Blu-ray or HD DVD discs often require video processing using a graphics processor. For example, Blu-ray discs typically contain multi-layer video content that requires blending, or compositing, of multiple planes to form composited images for display. This typically requires multiple passes through the graphics pipeline, computing the contribution of each plane in turn, in order to render the final image to the target surface.
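The multi-pass character of such compositing can be sketched with a simple per-pixel "source over" alpha blend: each plane requires its own pass, which reads the current target surface, blends the plane against it, and writes the result back. The blend equation, the plane contents, and the tiny surface size below are illustrative assumptions only.

```python
# Illustrative multi-pass compositing: each plane is blended onto the
# target surface in a separate pass, re-reading and re-writing the whole
# surface every time ("source over" blending, per pixel).

def blend_pass(target, plane):
    # One pass: blend `plane` onto `target`. Target pixels are (r, g, b);
    # plane pixels carry an additional alpha value in [0, 1].
    out = []
    for (tr, tg, tb), (pr, pg, pb, pa) in zip(target, plane):
        out.append((
            round(pr * pa + tr * (1 - pa)),
            round(pg * pa + tg * (1 - pa)),
            round(pb * pa + tb * (1 - pa)),
        ))
    return out

def composite(background, planes):
    # One pass per plane: N planes -> N full read/blend/write passes,
    # which is the memory-traffic cost discussed in the text.
    target = list(background)
    for plane in planes:
        target = blend_pass(target, plane)  # read-modify-write the surface
    return target

background = [(0, 0, 0)] * 4               # e.g. the main video plane
subtitle = [(255, 255, 255, 0.5)] * 4      # hypothetical subtitle plane
menu = [(0, 0, 255, 0.25)] * 4             # hypothetical menu plane
frame = composite(background, [subtitle, menu])
```

With N planes, the target surface is read and written N times, which motivates the bandwidth and buffering concerns raised next.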
Unfortunately, such multi-stage, multi-pass compositing may be inefficient, leading to multiple read and write operations that often increase video memory bandwidth requirements. In addition, multi-pass compositing often requires larger video memories to hold intermediate buffers, and greater shader processing capability to meet the timing constraints imposed by a particular output frame rate. Moreover, multi-stage, multi-pass compositing increases the power consumption of the graphics processor, and often requires a more complex application programming model in which difficult synchronization schemes must be implemented across stages.
Accordingly, there is a need for improved methods and devices for efficiently compositing video.