The technology described herein relates to display controllers for data processing systems.
In data processing systems, an image that is to be displayed to a user is processed by the data processing system for display. The image for display is typically processed by a number of processing stages before it is displayed to the user. For example, an image will be processed by a so-called “display controller” of a display for display.
Typically, the display controller will read an output image to be displayed from a so-called “frame buffer” in memory, which stores the image as a data array (e.g. by internal Direct Memory Access (DMA)), and provide the image data appropriately to the display (e.g. via a pixel pipeline) (which display may, e.g., be a screen or printer). The output image is stored in the frame buffer in memory, e.g. by a graphics processor, when it is ready for display, and the display controller will then read the frame buffer and provide the output image to the display for display.
The display controller processes the image from the frame buffer to allow it to be displayed on the display. This processing includes appropriate display timing functionality (e.g. it is configured to send pixel data to the display with appropriate horizontal and vertical blanking periods) to allow the image to be displayed on the display correctly.
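The display timing functionality described above can be illustrated with a short sketch. The following is not from the source: it is a hypothetical model of a timing generator that walks through one frame, distinguishing active pixel clocks from horizontal and vertical blanking clocks. The resolution and blanking values used are standard VGA example figures, assumed here for illustration only.

```python
# Illustrative sketch (assumed values, not from the source): a display
# controller's timing generator paces pixel output over a frame that
# includes horizontal and vertical blanking periods.

def frame_timing(h_active, h_blank, v_active, v_blank):
    """Yield (line, column, is_active) for every clock of one frame,
    including the blanking intervals after each line and each frame."""
    for line in range(v_active + v_blank):
        for col in range(h_active + h_blank):
            is_active = line < v_active and col < h_active
            yield line, col, is_active

# Count active pixel clocks vs. total clocks for example VGA-like timings.
active = total = 0
for _, _, is_active in frame_timing(h_active=640, h_blank=160,
                                    v_active=480, v_blank=45):
    total += 1
    active += is_active

print(active, total)  # 307200 active pixels out of 420000 clock cycles
```

Only the 640×480 active region carries image data; the remaining clocks are the blanking periods the display controller must insert for the display to latch lines and frames correctly.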
Many electronic devices and systems use and display plural windows (or surfaces) displaying information on their display screen, such as video, a graphical user interface, etc. One way of providing such windows is to use a compositing window system, in which individual input windows (surfaces) are combined appropriately (i.e. composited) and the result is written out to the frame buffer, which is then read by the display controller for display.
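The compositing step can be sketched as follows. This is a simplified, hypothetical model (not the composition engine described in the figures): input windows are blended back-to-front into a single output frame using a standard “source over” alpha blend, with the result representing what would be written to the frame buffer for the display controller to read.

```python
# Hypothetical sketch of window composition: surfaces are alpha-blended
# back-to-front into one output frame (greyscale values 0-255 for brevity).

def composite(surfaces, width, height):
    """surfaces: list of (x, y, pixels, alpha) layers, back to front.
    pixels maps (x, y) offsets within the window to a grey value."""
    out = [[0] * width for _ in range(height)]
    for ox, oy, pixels, alpha in surfaces:
        for (px, py), value in pixels.items():
            x, y = ox + px, oy + py
            if 0 <= x < width and 0 <= y < height:
                # Standard "source over" blend of this layer onto the result.
                out[y][x] = round(alpha * value + (1 - alpha) * out[y][x])
    return out

# Two overlapping 1x1 "windows": an opaque background layer and a
# half-transparent layer composited on top of it.
frame = composite([(0, 0, {(0, 0): 200}, 1.0),
                   (0, 0, {(0, 0): 100}, 0.5)], width=2, height=1)
print(frame[0][0])  # 150: half the top layer (100) over the background (200)
```

A real composition engine performs this per-pixel blend in hardware across full-resolution surfaces, but the data flow is the same: combine the input windows, then write the single composited result out for display.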
It is becoming increasingly common for electronic devices and systems to be configured so as to be able to provide output images for display on plural display devices. It may be desired, for example, to provide output images to the system's local display and to an external display. The output images provided to the two displays may be the same, or may differ, for example the external display may require and use a different resolution and/or aspect ratio to the local display.
FIG. 1 shows schematically the operation of a conventional dual-display compositing media processing system. One or more input surfaces are generated by video codec 1 and/or GPU 2, and stored in main memory 3 (e.g. frame buffer 0, 1 and 2). The stored input surfaces are read by and passed to composition engine 4 which combines (composes) the input surfaces to generate a composited output surface (frame). In the illustrated example, the composition engine 4 can also perform colour space conversion operations on the input surface from video codec 1. The composited output surface is stored in main memory 3 (e.g. in frame buffer 3). The stored composited output surface is then read by the local display controller 5 and displayed on the system's local display 6.
The stored composited output surface is also read back in from main memory 3 by the composition engine 4, before being subjected to appropriate rotation and/or scaling so as to generate an appropriately rotated and/or scaled output surface for an external display 8 (which may require a different resolution and/or aspect ratio for output). The rotated and/or scaled output surface is stored in main memory 3 (e.g. frame buffer 4), before being read by a second display controller 7, and displayed on the external display 8.
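The rescaling pass for the external display can be illustrated with a minimal sketch. The nearest-neighbour method below is an assumption for illustration; the source does not specify the scaling algorithm the composition engine uses.

```python
# Hypothetical sketch of the external-display scaling step: the composited
# frame is resampled to a different resolution using nearest-neighbour
# sampling (the simplest possible scaler, assumed here for illustration).

def scale_nearest(src, src_w, src_h, dst_w, dst_h):
    """Scale a src_h x src_w row-major frame to dst_w x dst_h."""
    return [[src[(y * src_h) // dst_h][(x * src_w) // dst_w]
             for x in range(dst_w)]
            for y in range(dst_h)]

# Upscale a 2x2 composited frame to 4x4 for a higher-resolution display.
src = [[1, 2],
       [3, 4]]
dst = scale_nearest(src, src_w=2, src_h=2, dst_w=4, dst_h=4)
print(dst)  # each source pixel becomes a 2x2 block
```

In the conventional arrangement of FIG. 1, this scaled result is then written back to main memory (frame buffer 4) before the second display controller reads it, i.e. the scaling pass adds a further full-frame write and read.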
FIG. 2 shows a conventional dual-display compositing media processing system. This comprises a central processing unit (CPU) 9, graphics processing unit (GPU) 2, video codec 1, composition engine 4, first display controller 5, second display controller 7 and a memory controller 10. As shown in FIG. 2, these communicate via an interconnect 11 and have access to off-chip main memory 3. The composition engine 4 generates the composited output frame from one or more input surfaces (e.g. generated by the GPU 2 and/or video codec 1) and the composited output frame is then stored, via the memory controller 10, in a frame buffer in the off-chip memory 3. The first display controller 5 then reads the composited output frame from the frame buffer in the off-chip memory 3 via the memory controller 10 and sends it to a local display 6 for display, and the second display controller 7 reads the composited output frame from the frame buffer in the off-chip memory 3 via the memory controller 10 and sends it to an external display 8 for display.
Conventional media processing systems can have limitations. For example, the number of surfaces (layers) that can be composited by the composition engine 4 may be limited (e.g. in the arrangement depicted in FIGS. 1 and 2, the composition engine 4 can only simultaneously handle one video layer and two graphics layers). Where it is desired to compose and display more surfaces than can be simultaneously handled by the composition engine 4, the graphics processing unit (GPU) 2 or composition engine 4 will typically pre-compose (or “flatten”) some of the surfaces before storing a pre-composited (“flattened”) surface in main memory 3. The composition engine 4 will then read the stored pre-composited surface together with the remaining input surfaces and combine the surfaces to generate a composited output surface (frame). The composited output surface is stored in main memory 3, and the stored composited output surface is read by the local display controller 5 and displayed on the system's local display 6.
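The pre-composition (“flattening”) arrangement described above can be sketched as a simple planning step. The layer limit of three below is taken from the one-video-plus-two-graphics example; the splitting policy (flattening the bottom-most excess layers) is an assumption for illustration.

```python
# Hypothetical sketch of "flattening": when there are more input layers
# than the composition engine can handle in one pass (here, a limit of 3,
# per the one-video-two-graphics example), the excess bottom layers are
# pre-composited into one intermediate surface first.

ENGINE_LAYER_LIMIT = 3

def plan_composition(layers):
    """Return (layers_to_pre_composite, inputs_for_the_final_pass)."""
    if len(layers) <= ENGINE_LAYER_LIMIT:
        return [], layers
    # Flatten enough of the bottom-most layers that the flattened result
    # plus the remaining layers fit within the engine's limit.
    n_flatten = len(layers) - ENGINE_LAYER_LIMIT + 1
    return layers[:n_flatten], ["flattened"] + layers[n_flatten:]

pre, final = plan_composition(["g0", "g1", "g2", "g3", "video"])
print(pre)    # ['g0', 'g1', 'g2'] - pre-composited into one surface
print(final)  # ['flattened', 'g3', 'video'] - 3 inputs, within the limit
```

Note that the flattened surface must itself be written to and read back from main memory, so exceeding the engine's layer limit costs an extra full-frame round trip.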
In data processing systems in lower power and portable devices, the bandwidth cost of writing data to external memory, and of the converse operation of reading data from external memory, can be a significant issue. Bandwidth consumption can be a significant source of power consumption and heat, and so it is generally desirable to try to reduce bandwidth consumption for external memory reads and writes in data processing systems.
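A back-of-the-envelope calculation illustrates the scale of this bandwidth cost. The frame size, pixel format and refresh rate below are assumed example values, not figures from the source.

```python
# Illustrative arithmetic (assumed values): each full-frame write or read
# of a 1080p RGBA frame at 60 fps moves roughly half a gigabyte per second.

width, height = 1920, 1080
bytes_per_pixel = 4   # assumed RGBA8888 pixel format
fps = 60

frame_bytes = width * height * bytes_per_pixel
one_way = frame_bytes * fps   # one write OR one read of every frame, per second

print(frame_bytes)      # 8294400 bytes per frame (~7.9 MiB)
print(one_way / 1e9)    # ~0.498 GB/s for a single pass over the frame data
```

Each extra frame-buffer round trip in the conventional flow (writing the composited frame, reading it back for scaling, writing the scaled frame, reading it for display) multiplies this figure, which is why reducing external memory traffic matters in low-power devices.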
The Applicants believe that there remains scope for improvements to display controllers.
Like reference numerals are used for like components throughout the drawings, where appropriate.