The technology described herein relates to data processing systems and in particular to display controllers for data processing systems.
FIG. 1 shows an exemplary data processing system that comprises a central processing unit (CPU) 7, a graphics processing unit (GPU) 2, a video codec 1, a display controller 30, and a memory controller 8. As shown in FIG. 1, these units communicate via an interconnect 9 and have access to off-chip memory 3.
In use of this system, the GPU 2, video codec 1 and/or CPU 7 will, for example, generate surfaces (images) to be displayed and store them, via the memory controller 8, in respective frame buffers in the off-chip memory 3. The display controller will, for example, then read those surfaces as input layers from the frame buffers in the off-chip memory 3 via the memory controller 8, process the input surfaces appropriately and send them to a display 4 for display.
FIG. 2 shows an exemplary data path for the input surfaces for display in the display controller 30. It is assumed in this example that the display controller 30 can take, as inputs for a given output surface to be displayed, a plurality of input surfaces (layers), and includes, inter alia, a composition engine (stage) 22 that is able to compose one or more input surfaces (layers) (e.g. generated by the GPU 2 and/or video codec 1) to provide a composited surface (frame) for display.
As shown in FIG. 2, the display controller 30 includes a DMA (Direct Memory Access) read unit 20 that reads data of input surfaces to be displayed and provides that data appropriately to respective layer processing pipelines 21 that perform appropriate operations on the received input surfaces before they are provided to the display composition stage 22, where they are composited into the desired composited surface for display.
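The front-end data path described above (fetch each layer, process it, then blend the layers into one composited surface) can be sketched in software. The sketch below is purely illustrative and not taken from the source: all names are hypothetical, the per-layer constant opacity and the simple source-over blend are assumptions standing in for whatever per-layer processing and composition the display composition stage 22 actually performs.

```python
# Hypothetical sketch of the FIG. 2 front end (illustrative names only):
# each "layer" here is (pixels, alpha), where pixels is a 2D list of
# values and alpha is an assumed constant per-layer opacity in [0, 1].

def compose(layers):
    """Blend layers bottom-first with a simple source-over operation."""
    h, w = len(layers[0][0]), len(layers[0][0][0])
    out = [[0.0] * w for _ in range(h)]  # background starts black
    for pixels, alpha in layers:  # bottom layer first
        for y in range(h):
            for x in range(w):
                # Source-over: new layer weighted by its opacity,
                # previously composited result shows through the rest.
                out[y][x] = alpha * pixels[y][x] + (1 - alpha) * out[y][x]
    return out
```

An opaque bottom layer (alpha 1.0) fully replaces the background, and a half-transparent top layer (alpha 0.5) then mixes equally with it, which is the behaviour one would expect of a composition stage blending two input surfaces.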
The composited surface may be subjected to display timing control (e.g. the inclusion of appropriate horizontal and vertical blanking periods), and then provided to the display output interface of the display controller 30 for provision to the display 4 for display.
This process is performed for each frame that needs to be displayed, e.g. at a rate of 30 or 60 frames per second.
As shown in FIG. 2, the composited surface from the display compositor 22 may be subject to further processing before it is, e.g., displayed, in image processing stages (blocks) 23, 24, 25 and 26. These image processing stages may perform a variety of different image processing operations on the composited surface, such as, for example, scaling, sharpening, high dynamic range imaging (HDR), colour adjustment, brightness enhancement functions, other image quality enhancement functions and compression of the composited surface.
As shown in FIG. 2, these subsequent image processing stages may be internal to the display controller 30, as in the case of stages (blocks) 23 and 25; they may be connected to the display controller processing pipeline through special display controller interfaces, as in the case of the image processing stage 24; or they may be outside the display controller itself and perform processing on an output data stream from the display controller 30, as in the case of the image processing stage 26; etc.
It would also be possible, for example, for the composited surface to be passed to an image processing stage via external memory (i.e. by writing the composited surface to external memory from where it is then read and processed).
Other arrangements would, of course, be possible.
Many of the image processing operations that may be performed on a composited surface that is to be displayed employ so-called “window” operations, which use several input data positions (pixel positions) in the composited surface to determine the value of a given output data position (pixel) in the processed composited surface that is output by the image processing operation. For example, an image processing operation may perform a filtering operation using a 3×3 pixel window to produce each output pixel value.
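A 3×3 window operation of the kind just described can be sketched as follows. This is an illustrative example, not from the source: a simple box (mean) filter, with clamp-to-edge addressing assumed at the image borders, is used to stand in for whatever filter an image processing stage might actually apply.

```python
# Illustrative 3x3 "window" operation: each output pixel is derived
# from a 3x3 neighbourhood of input pixels (here, their mean).

def box_filter_3x3(image):
    """Apply a 3x3 mean filter to a 2D list of integer pixel values."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    # Clamp window coordinates at the image borders
                    # (an assumed edge policy, for illustration only).
                    sy = min(max(y + dy, 0), h - 1)
                    sx = min(max(x + dx, 0), w - 1)
                    total += image[sy][sx]
            out[y][x] = total // 9
    return out
```

A uniform image is unchanged by the filter, while any pixel whose 3×3 window straddles a change in value takes a mixture of the surrounding values, which is precisely the property that causes the boundary problem discussed next.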
The Applicants have recognised that where image processing operations use plural input pixels to derive an output pixel value, then that can lead to errors around the borders between different input surfaces (layers) in a composited surface, because in those regions of the composited surface, the image processing function (e.g. 3×3 filter window) may take data positions (pixels) from different input surfaces (layers). This can then introduce “blur” between the different input surfaces (layers) of the composited surface after the image processing, because of “bleeding” of colours from one surface (layer) to another.
FIG. 3 illustrates this and shows an example composited surface 40 that is composited from two input surfaces (layers) 41, 42. As shown in FIG. 3, a 3×3 filtering window 43 is then applied to the composited surface 40 to produce a filtered version 44 of the composited surface 40. As shown in FIG. 3, the rows 45 at the boundary between the input layers 41 and 42 in the composited surface 40 are blurred, because of “bleeding” of colours from one layer to the other layer when the 3×3 filtering window 43 was applied to the composited surface 40.
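The boundary effect of FIG. 3 can be reproduced numerically. The sketch below is hypothetical and not from the source: two flat layers (standing in for the input surfaces 41 and 42) are stacked one above the other, a 3×3 mean filter (an assumed stand-in for the filtering window 43) is applied, and the rows either side of the layer boundary come out with mixed values, i.e. colour has “bled” from one layer into the other.

```python
# Illustrative reproduction of the FIG. 3 boundary "bleeding" effect.

def mean_3x3(img):
    """3x3 mean filter with clamp-to-edge addressing (assumed policy)."""
    h, w = len(img), len(img[0])
    clamp = lambda v, lo, hi: min(max(v, lo), hi)
    return [[sum(img[clamp(y + dy, 0, h - 1)][clamp(x + dx, 0, w - 1)]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)) // 9
             for x in range(w)] for y in range(h)]

# One flat layer of value 0 composited above one flat layer of value 90.
composited = [[0] * 4 for _ in range(2)] + [[90] * 4 for _ in range(2)]
filtered = mean_3x3(composited)
# Rows away from the boundary keep their layer's value, but the row on
# each side of the boundary is pulled toward the other layer's colour.
```

Here `filtered` has rows of 0, 30, 60 and 90: the two middle rows, whose 3×3 windows straddle the layer boundary, no longer belong cleanly to either layer.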
The Applicants believe that there remains scope for improvements to the operation of display controllers that composite input surfaces (layers) to provide a composited surface, and in particular for arrangements that reduce or avoid blurring when performing processing operations on a surface composited from two or more input surfaces (layers).
Like reference numerals are used for like components throughout the drawings, where appropriate.