Applications employ multiple displays driven by multiple image-processing circuits, one example of which is a graphics processor (GPU), each of which can drive multiple displays. One such application displays a single image across the multiple displays driven separately by the multiple GPUs. One implementation employs a single memory buffer, called a single large surface, to render and store the digital image in the GPU frame buffer for display. Using a single large surface allows software to render a digital image in the same way as in a single-display configuration, without specific knowledge of the underlying display and GPU connection topology.
Stated another way, a computer system can have multiple graphics processors (GPUs) installed, each of which may or may not have its own display connectors driving multiple display devices (monitors). A GPU drives its display outputs from buffers (primary surfaces) in video memory.
One known method, in which multiple monitors are driven by a single GPU, uses a set of primary surfaces spanning multiple displays in the GPU's video memory. The GPU maps a partition of each primary surface to each of the displays as if the partition were a distinct surface. As such, images are presented from a single large surface encompassing all the partitions corresponding to all the monitors. However, this method applies only to apparatuses in which a single GPU drives multiple displays, limiting the total number of displays the apparatus can support. It also supports only a single GPU as the rendering processor, so processing performance is limited to that of one GPU rather than the increased processing performance collectively provided by multiple GPUs, e.g., Alternate Frame Rendering by multiple GPUs.
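The partition mapping described above can be sketched as follows. This is a minimal illustration, not driver code; the names `Partition` and `map_pixel` are hypothetical, assuming a large surface divided into per-monitor rectangles whose coordinates are translated to a monitor-local position.

```python
from dataclasses import dataclass

@dataclass
class Partition:
    """A rectangular region of the large surface assigned to one monitor."""
    monitor_id: int
    x: int       # partition origin within the large surface
    y: int
    width: int
    height: int

def map_pixel(partitions, sx, sy):
    """Translate large-surface coordinates (sx, sy) to
    (monitor_id, local_x, local_y) for the partition that contains them."""
    for p in partitions:
        if p.x <= sx < p.x + p.width and p.y <= sy < p.y + p.height:
            return (p.monitor_id, sx - p.x, sy - p.y)
    raise ValueError("pixel lies outside the large surface")

# Two 1920x1080 monitors side by side form one 3840x1080 large surface.
sls = [Partition(0, 0, 0, 1920, 1080), Partition(1, 1920, 0, 1920, 1080)]
```

For example, large-surface pixel (2000, 100) falls in the second partition and maps to local position (80, 100) on monitor 1, while the application sees only the single 3840x1080 surface.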
In another known method using multiple GPUs, each GPU renders the image to its own memory buffer (e.g., primary surface) as a part of the single large surface. As such, each primary surface is allocated separately in each GPU's frame buffer memory and is not sharable. The rendering program is required to recognize the GPU and display topology, to render separately to each of the primary surfaces, and to compose the entire image using the individual display controls in each GPU so that the image is displayed synchronously. This method requires complex software implementation that employs different programming logic depending on the number of GPUs and the locality of the multiple displays.
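The topology-awareness burden on the rendering program can be illustrated with a sketch, under the assumption of monitors arranged horizontally with one group of monitors per GPU; the function `split_frame` is hypothetical, standing in for the per-application logic that must divide each frame among the separate, non-sharable primary surfaces.

```python
def split_frame(frame_width, frame_height, gpu_monitor_widths):
    """Split one frame horizontally into per-GPU regions.

    gpu_monitor_widths[i] is the total pixel width of the monitors
    driven by GPU i. Returns (gpu_id, x, y, width, height) tuples,
    one region per GPU, which the application must render and then
    present separately on each GPU's primary surface.
    """
    regions = []
    x = 0
    for gpu_id, width in enumerate(gpu_monitor_widths):
        regions.append((gpu_id, x, 0, width, frame_height))
        x += width
    if x != frame_width:
        raise ValueError("display topology does not cover the frame")
    return regions
```

Note that this splitting logic changes whenever the number of GPUs or the placement of the monitors changes, which is exactly the per-topology programming burden the method imposes.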
If a graphics application wants to display a single frame of graphics/video content across multiple monitors, it is required to render separately to the individual primary surfaces corresponding to those monitors. After all portions of a frame have been rendered, the application must display the composited frame from the multiple buffers. Rendering and managing all of the primary surfaces requires specific knowledge of the display topology and handling logic in the executing application software.
When multiple monitors are driven by a single GPU, an existing solution that makes applications transparent to the multiple monitors and the display topology is to use a single set of primary surfaces spanning the monitors in the GPU's video memory. The display driver maps a partition of the primary surface to each of the monitors as if the partition were a distinct surface. Applications are presented with a single large surface (SLS) encompassing all the partitions corresponding to all the monitors.
In addition, another possible prior-art solution uses a multi-GPU rendering technique called SFR, or Split Frame Rendering. Each GPU renders the portion of the frame corresponding to the monitors connected to its local display connectors, so every frame is collectively rendered by all GPUs. This is not a preferred multi-GPU rendering mode because of its lower performance: since all GPUs are involved in rendering every frame, the data required to render each frame must be made available to every GPU each frame, causing a large amount of data traffic across the GPUs via the interconnect bus. The sum of all GPUs' workloads can involve a large processing overhead.
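The contrast between the two multi-GPU modes named above can be sketched in terms of which GPUs participate in a given frame: Alternate Frame Rendering assigns each whole frame to one GPU in round-robin fashion, while Split Frame Rendering involves every GPU in every frame. The functions below are purely illustrative, not any driver's actual scheduling API.

```python
def afr_assignment(frame_index, gpu_count):
    """Alternate Frame Rendering: one GPU renders the whole frame,
    with consecutive frames assigned round-robin across GPUs."""
    return [frame_index % gpu_count]

def sfr_assignment(frame_index, gpu_count):
    """Split Frame Rendering: every GPU renders a slice of every
    frame, so per-frame data must reach all GPUs each frame."""
    return list(range(gpu_count))
```

Under AFR, per-frame rendering data travels to a single GPU per frame, whereas under SFR it must be broadcast to all GPUs every frame, which is the interconnect-traffic drawback described above.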
Accordingly, there is a need to overcome one or more of the aforementioned drawbacks.