Graphics capabilities are commonly implemented in a number of electronic devices. Single-user computers such as desktop computers, laptop computers, handheld computers, and the like often use graphical displays for interacting with a user. Also, many digital video consumer electronics products, such as those for digital television, set-top box, and DVD applications, often use computer graphics capabilities both to display video streams and to generate overlays, windows, menus and other displayable controls. Many consumer electronics products also provide graphical user interfaces (GUIs) much like those of personal computers, requiring the rendering of graphic lines, complex geometric shapes, and a multitude of colors and pixel formats, while also possibly being used for video resizing and display. Furthermore, some products may need to concurrently display multiple graphical components, e.g., various combinations of full-motion video display windows, internet session windows, GUI and background graphic objects, windows, and/or digital images.
To generate complex displays, a number of different image processing techniques are often required. For example, image scaling, which expands or reduces the size of a graphic image, is often required to scale a display, or sometimes a region or individual graphic element on a display.
Many image processing techniques, such as image scaling, can be implemented entirely in software. The processing overhead associated with providing graphics functionality such as image scaling, however, can often place a significant burden on the Central Processing Units (CPUs) of many electronic devices, as well as their memory subsystems and associated bus performance. In such instances, it may be desirable to off-load some or all of the graphics functionality required in a particular electronic device to a dedicated special-purpose graphics controller.
A graphics controller is typically a separate integrated circuit from a CPU, with special hardware incorporated into the circuit to assist a CPU in performing specialized graphics functions and operations. Among other functionality, a graphics controller often incorporates image scaling functionality to facilitate resizing images without overly taxing the CPU.
An image scaler is typically configured to expand or reduce the size of a two-dimensional source image. In a conventional image scaler, a source image is fetched, operated on as desired for expansion or reduction, and stored as a resultant destination image. While some image scalers may provide only a fixed amount of scaling, more generalized image scalers are able to provide variable scaling based upon parameters supplied by a CPU. In addition, often the amount of expansion or reduction in the horizontal dimension can be separate and independent from that in the vertical dimension. Furthermore, in some instances no expansion or reduction may be performed in either or both dimensions, thus leaving the source data unchanged in one or both dimensions when stored to the destination image.
Typical image scalers employ fractional image scaling, where the amount of expansion or reduction in a given dimension is defined by a “Scale Factor”, mathematically defined as the ratio of destination image size (in units of “number of pixels”) to the source image size (also in number of pixels), e.g., in terms of L/M, where L is the destination image size, and M is the source image size. The scale factor is usually expressed as a fraction or decimal value. For example, in the horizontal dimension, for a source image that is 800 pixels wide, and where the destination image is scaled down to 640 pixels, the horizontal scale factor may be represented as LH/MH=640/800=0.8=4/5. The vertical scale factor is typically defined in the same manner, but using source and destination image dimensions in the vertical direction.
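The scale factor arithmetic above can be illustrated with a short sketch (the function name `scale_factor` is chosen for illustration only):

```python
from fractions import Fraction

def scale_factor(dest_size: int, src_size: int) -> Fraction:
    """Scale factor L/M: destination size over source size, in pixels."""
    return Fraction(dest_size, src_size)

# Horizontal example from the text: an 800-pixel source scaled to 640 pixels.
sf_h = scale_factor(640, 800)
print(sf_h)         # 4/5
print(float(sf_h))  # 0.8
```

`Fraction` reduces 640/800 to lowest terms automatically, matching the 4/5 form used above.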
A number of techniques have been developed to accomplish fractional image scaling. For example, some image scalers employ simple pixel dropping and duplication (or oversampling), for reduction and expansion, respectively. In the vertical dimension this is often referred to as line dropping and line duplication. In most instances, however, this method provides relatively crude and low quality results.
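Pixel dropping and duplication can be sketched for a single line as follows (a simplified, hypothetical model; hardware implementations operate on streamed pixel data rather than whole lines):

```python
def scale_line_nearest(src, scale):
    """Pixel dropping (scale < 1) or duplication (scale > 1) on one line:
    each destination pixel simply copies the nearest source pixel."""
    dest_len = round(len(src) * scale)
    return [src[min(int(i / scale), len(src) - 1)] for i in range(dest_len)]

line = [10, 20, 30, 40]
print(scale_line_nearest(line, 2.0))  # duplication: [10, 10, 20, 20, 30, 30, 40, 40]
print(scale_line_nearest(line, 0.5))  # dropping: [10, 30]
```

Applied to whole lines rather than pixels, the same logic yields the line dropping and line duplication mentioned above.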
Another technique, interpolation filtering, multiplies the filter's input pixels by coefficients that depend solely upon the instantaneous position of the filter while processing the full source image pixel stream. With this technique, the exact position may fall between two input pixel locations at times. Interpolation filters have two or more inputs, and the sum of the coefficients is always one.
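A minimal two-tap interpolation filter can be sketched as follows (illustrative only; the coefficients `1 - frac` and `frac` depend solely on the filter position and always sum to one, as described above):

```python
def interpolate_line(src, scale):
    """Two-tap linear interpolation filtering of one line."""
    dest_len = round(len(src) * scale)
    out = []
    for i in range(dest_len):
        pos = i / scale                       # exact source position
        left = min(int(pos), len(src) - 1)    # nearest input pixel to the left
        right = min(left + 1, len(src) - 1)   # nearest input pixel to the right
        frac = pos - left                     # coefficients (1 - frac) + frac == 1
        out.append((1 - frac) * src[left] + frac * src[right])
    return out

print(interpolate_line([0, 100], 2.0))  # [0.0, 50.0, 100.0, 100.0]
```

When the position falls exactly between two input pixels (frac = 0.5), both inputs are weighted equally.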
Yet another technique is symmetrical linear filtering, where the input pixels to the filter are also multiplied by coefficients that are fixed for a given range of scale factors. These filters usually possess an odd number of inputs (and hence an odd number of coefficients), and the coefficients are weighted to emphasize the center input pixel as the most prominent, such that the coefficients decrease on either side of the center input, or tap, in a linear and symmetrical fashion (i.e., straight line and mirror image). As with interpolation filtering, the sum of the coefficients is one.
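A symmetrical linear filter of this kind can be sketched with a hypothetical three-tap example, where the center coefficient dominates and the coefficients decrease symmetrically and sum to one:

```python
def symmetric_filter(src, coeffs):
    """Apply an odd-length symmetric filter to one line (edges clamped)."""
    assert len(coeffs) % 2 == 1 and abs(sum(coeffs) - 1.0) < 1e-9
    half = len(coeffs) // 2
    out = []
    for i in range(len(src)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            j = min(max(i + k - half, 0), len(src) - 1)  # clamp at line edges
            acc += c * src[j]
        out.append(acc)
    return out

# Three taps: center weighted twice each neighbor, linear and symmetric.
print(symmetric_filter([0, 0, 100, 0, 0], [0.25, 0.5, 0.25]))
# [0.0, 25.0, 50.0, 25.0, 0.0]
```

The fixed coefficients distinguish this approach from interpolation filtering, whose coefficients vary with position.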
Other techniques and algorithms for image scaling exist, and combinations of these techniques may be used in some designs to provide better results than using any one technique alone.
A conventional image scaler typically employs several functional units that cooperate to provide image scaling functionality. A source image memory read unit is typically used to retrieve source image data and provide the data to a horizontal filter unit, which operates on the source image data to expand or reduce the data in a horizontal direction based upon the desired horizontal scaling factor. The horizontal filter unit then outputs horizontally-scaled data to a vertical filter unit to expand or reduce the data in a vertical direction based upon the desired vertical scaling factor. The vertical filter unit then outputs the data to a destination image memory write unit, which stores the data as destination image data. In some image scaler designs, an additional unit, an edge enhancement unit, may also be used to improve image quality when using relatively large scale factors. In addition, the memory read and/or write units may be configured to convert image data to different pixel formats, e.g., RGB32, RGB16, YCbCr, LUT8, etc.
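The cooperation of these units can be modeled in software as a read/horizontal-filter/vertical-filter/write pipeline (a simplified, hypothetical sketch using nearest-neighbor scaling in each dimension; a real scaler streams pixels through dedicated filter hardware):

```python
def scale_image(src, h_scale, v_scale):
    """Model of the unit pipeline: read -> horizontal filter ->
    vertical filter -> write, with independent scale factors."""
    def scale_1d(seq, scale):
        # Nearest-neighbor expansion/reduction of a sequence.
        n = round(len(seq) * scale)
        return [seq[min(int(i / scale), len(seq) - 1)] for i in range(n)]
    # Horizontal filter unit: scale each fetched source line.
    intermediate = [scale_1d(row, h_scale) for row in src]
    # Vertical filter unit: scale the sequence of intermediate lines.
    return scale_1d(intermediate, v_scale)

img = [[1, 2], [3, 4]]
print(scale_image(img, 2.0, 2.0))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Note how the horizontal and vertical operations are independent, mirroring the separate horizontal and vertical filter units described above.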
Additional front end processing methods may also be used in some designs for larger expansion scale factors to reduce the “staircase” effects on angled and curved feature boundaries in images during expansions. These methods are typically applied ahead of any horizontal and vertical filtering. One such method is to factor out power-of-2 duplication factors from the horizontal and/or vertical scale factors, duplicate pixels and/or lines based on these new factors, and apply a selective directional filter for smoothing over an area (typically using un-weighted or weighted area sampling).
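The power-of-2 factoring step can be sketched as follows (an illustrative, hypothetical helper; the smoothing filter that follows it is omitted):

```python
def factor_pow2(scale):
    """Factor out the largest power-of-2 duplication factor from an
    expansion scale factor, returning (duplication factor, residual scale)."""
    dup = 1
    while scale / (dup * 2) >= 1.0:
        dup *= 2
    return dup, scale / dup

print(factor_pow2(6.0))  # (4, 1.5): duplicate 4x, then scale by 1.5
print(factor_pow2(3.0))  # (2, 1.5)
```

Pixels and/or lines would then be duplicated by the power-of-2 factor before the residual fractional scaling and selective directional filtering.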
The horizontal and vertical filter units of an image scaler typically employ dedicated special purpose digital filters with associated multiply-add-normalize arithmetic elements to perform low-pass filtering. In addition, the image scaling mechanism for the horizontal dimension generally employs separate filter circuitry from that of the vertical dimension. These digital filters use multiple inputs consisting of adjacent and/or nearby pixels, either horizontally in a single line, or vertically spanning more than one line but from the same horizontal pixel location in each of those lines.
High performance hardware image scalers typically fetch pixel data from memory only once, thereby increasing image scaler processing speed (throughput) and reducing the memory subsystem and bus loading on the rest of the system. For the horizontal filter arithmetic elements, multiple horizontally adjacent or nearby inputs are needed simultaneously, and these are supplied via local registered buffering of data fetched sequentially from memory. In most designs, horizontally adjacent pixels of a source image occupy sequentially contiguous memory locations, and as such, source image data may often be retrieved sequentially from memory in bursts via the memory read unit, locally buffered in a first-in/first-out (FIFO) buffer, and supplied as needed to input registers in the horizontal filter unit.
Also in most designs, lines of source image data are arranged sequentially in memory, starting at the top line of the source image. As such, image data is typically read in from top to bottom, with the image data on each line supplied sequentially. The horizontal filter unit then processes the retrieved data to expand or reduce the number of pixels in each line. The resultant output pixel data from the horizontal filter unit, however, constitutes intermediate results, and must be provided to the vertical filter unit to generate the final destination image pixels by combining the results for each line with the horizontal filter intermediate results from previous lines.
For the vertical filter unit arithmetic elements, multiple input pixel values are needed simultaneously from adjacent or nearby vertical positions (yet at the same horizontal position) to generate the final image scaler output pixel values that constitute the destination image. As a result, local buffering is typically required in the vertical filter unit to store the intermediate results generated by the horizontal filter unit. The buffered intermediate results typically take the format of multiple lines of source image data that has been expanded or reduced by the desired horizontal scaling factor.
Local buffering in the vertical dimension typically entails using line buffers to store entire horizontal lines of intermediate results from vertical image positions above the current line under operation. The vertical filter unit uses these line buffers to provide intermediate results from previous lines yet at the current intermediate image horizontal position as inputs along with the current intermediate results from the horizontal filter unit. For vertical filter units with n inputs or “taps”, typically n−1 line buffers are needed.
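The use of n−1 line buffers for an n-tap vertical filter can be sketched as a streaming model (simplified and hypothetical: edge handling and the vertical scaling ratio are ignored, and each incoming line represents horizontally-filtered intermediate results):

```python
from collections import deque

def vertical_filter_stream(lines, coeffs):
    """Streaming n-tap vertical filter using n-1 line buffers."""
    n = len(coeffs)
    buffers = deque(maxlen=n - 1)  # the n-1 line buffers
    for line in lines:
        if len(buffers) == n - 1:
            # n lines available: the buffered previous lines plus the current
            # one, combined column by column at the same horizontal position.
            window = list(buffers) + [line]
            yield [sum(c * window[k][x] for k, c in enumerate(coeffs))
                   for x in range(len(line))]
        buffers.append(line)  # oldest buffered line is evicted automatically

lines = [[0, 0], [100, 100], [0, 0], [0, 0]]
print(list(vertical_filter_stream(lines, [0.25, 0.5, 0.25])))
# [[50.0, 50.0], [25.0, 25.0]]
```

Each line buffer must hold an entire horizontally-scaled line, which is the source of the size limitation discussed next.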
One limitation found with conventional image scaler designs, however, is that the horizontal size of a destination image is inherently limited by the size of the line buffers used in the vertical filter unit, as each line buffer is required to store all of the horizontally expanded/reduced data for a given line as output by the horizontal filter unit.
For higher resolution displays, e.g., with line widths of 1920 pixels or more, and with higher color depths, the memory requirements of “full-width” line buffers in a vertical filter unit can be substantial. Large line buffers often require a significant amount of circuitry, which occupies valuable real estate on an integrated circuit, and often results in increased chip size and cost. Also, for graphic controllers intended for use in power-sensitive designs (e.g., in battery powered electronic devices), the circuitry required to implement full-width line buffers often adds to the overall power consumption of the chip.
Furthermore, as graphics and display technologies continue to improve, display resolutions continue to increase, requiring larger full-width line buffers to support the higher resolution displays. Increasing the size of the full-width line buffers in a vertical filter unit, however, adds additional circuitry to the design, thus further increasing chip size, cost and power consumption.
Therefore, a significant need has arisen for an image scaler design that avoids the limitations and drawbacks presented by the use of full-width line buffers in a vertical filter unit.