A personal computer 100 (FIG. 1A) includes a graphics processor 104 that generates a display of a three-dimensional (abbreviated as "3D") image on a screen 101 under the control of a central processing unit 105. Graphics processor 104 forms the displayed image from descriptions of one or more graphics primitives, such as a triangle 106 (FIG. 1B) that covers a picture element (called "pixel") 107. The image displayed on screen 101 is typically formed by a two-dimensional array of such pixels, each of which has a color.
Graphics processor 104 changes the color of pixel 107 depending on the location of triangle 106 relative to the center 108 of pixel 107. In the example illustrated in FIG. 1B, center 108 falls outside of triangle 106, and graphics processor 104 leaves the attributes of pixel 107 unchanged. However, when center 108 falls inside triangle 106, pixel 107 is colored the color of triangle 106. When one pixel (e.g. pixel 113) is fully colored (with the triangle's color) while an adjacent pixel (e.g. pixel 107) is not colored at all, the result is a defect noticeable to the human eye, a problem called "aliasing."
The aliasing problem is reduced (or even eliminated) when fractional coverage of a pixel by a triangle (or other primitive) causes a change in the color displayed for the pixel by a corresponding fractional amount. Such fractional change of a pixel's color requires obtaining multiple samples of colors for each pixel, in a process called "multisampling." For example, four colors (also called "multisamples") can be produced from rasterization (a well-known process), one for each of four locations A-D (FIG. 1C) within pixel 107 (instead of just the one location 108 as described above).
Graphics processor 104 treats each multisample as a miniature pixel throughout the whole rendering process. At the point of display, processor 104 averages the four multisamples to obtain an averaged color, and displays the pixel at the averaged color (also called "resolve color"). For example, a pixel 109 (FIG. 1D) has two multisamples covered by triangle 106 and two other multisamples covered by triangle 110. Therefore pixel 109 is displayed at an equal blend of the colors of triangles 106 and 110. For clarity, the four multisamples of each of various pixels (e.g. pixels 109 and 112) are not labeled. If triangle 106 is green and triangle 110 is red, then pixel 109 is displayed in a greenish-red color. Similarly, another pixel 112 (FIG. 1E) has three multisamples covered by triangle 106 and a fourth multisample covered by triangle 111, and is therefore displayed at a color that is three quarters green and one quarter black (if triangle 111 is black).
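The resolve operation described above can be sketched as a per-channel average of a pixel's multisample colors. This is a minimal illustrative sketch; the function name and the RGB tuple representation are assumptions, not part of the original description.

```python
# Hypothetical sketch of the multisample "resolve": the displayed color of a
# pixel is the per-channel average of its multisample colors.

def resolve_color(multisamples):
    """Average the (R, G, B) colors of a pixel's multisamples."""
    n = len(multisamples)
    return tuple(sum(sample[c] for sample in multisamples) // n for c in range(3))

GREEN = (0, 255, 0)   # color of triangle 106 (per the example in the text)
RED = (255, 0, 0)     # color of triangle 110

# Pixel 109: two multisamples covered by each triangle -> an equal blend.
print(resolve_color([GREEN, GREEN, RED, RED]))  # (127, 127, 0)
```

With four multisamples, each covering triangle contributes its color in proportion to the number of multisamples it covers, which is exactly the fractional-coverage behavior that reduces aliasing.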
Another type of aliasing occurs when there is insufficient resolution (also called "depth") in the color signal used to display the image on screen 101 (FIG. 1A). For example, when the displayed image includes a color ramp going from left to right, with red on the left and green on the right, if there is sufficient color resolution the color of the displayed image changes gradually and smoothly along the horizontal axis. However, if there is insufficient color resolution, vertical bands of different colors are noticeable. Color resolution depends on the number of bits used in a signal used to identify the color, such as 24 bits (wherein each of the red, green and blue signals is stored in 8 bits) or 16 bits (wherein the red and blue signals are stored in 5 bits each and the green signal is stored in 6 bits). At the minimum color resolution, the color signal has just 3 bits (1 bit for each of red, green and blue), and vertical bands are clearly noticeable when a color ramp is displayed.
If n multisamples are used for each pixel, and 24 bits are used to store the color signal for each multisample, each pixel requires 24*n bits (e.g. 96 bits when n is 4). The just-described 96 bits is an insignificant amount of memory if processor 104 only processes one pixel at a time. However, a significant amount of memory is required outside processor 104, e.g., in a frame buffer implemented by a DRAM. In a "tiled" architecture, screen 101 (FIG. 1A) is subdivided into rectangular areas (called "tiles"), and processor 104 must process, at any one time, all the pixels in such an area. Such an area can be, for example, 32 pixels tall and 32 pixels wide, thereby requiring processor 104 to have a minimum of 32*32*96 bits (i.e. 12 KB) of memory. If the number of multisamples or the number of bits for color resolution is increased, the amount of memory required is also increased.
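The memory figure above can be verified with quick arithmetic, assuming the stated parameters (n = 4 multisamples, 24 bits of color per multisample, a 32-by-32-pixel tile):

```python
# Arithmetic check of the tile-memory figure from the text:
# n multisamples per pixel, 24-bit color each, over a 32x32-pixel tile.
bits_per_pixel = 24 * 4              # 96 bits when n = 4
tile_bits = 32 * 32 * bits_per_pixel
print(tile_bits)                     # 98304 bits
print(tile_bits // 8 // 1024)        # 12 (KB), matching the text
```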
A graphics processor in accordance with this invention displays pixels in an image using signals having resolution that is non-uniform across the image. In one embodiment, the graphics processor uses signals having a first resolution (also called "higher resolution") in the interior of a surface in the image, and signals having a second resolution (also called "lower resolution") at edges (also called "discontinuities") of the surface. The just-described signals can be any signals in a graphics processor that indicate a predetermined attribute, such as color.
Use of higher resolution of a color signal in the interior of a surface eliminates color aliasing (also known as "Mach banding") that would otherwise occur in the interior if the interior were displayed at the lower resolution. At the discontinuities, use of multisamples having lower resolution of color, or even discarding color in favor of luminance, is not noticeable to the human eye when a pixel obtained from such multisamples is displayed. Such use of signals having two or more widths (in the form of resolutions in this example) allows the graphics processor to use one or more multisample signals at a lower resolution than the prior art, thereby reducing hardware (e.g. memory locations required to store such signals, and lines required to route such signals).
In one embodiment of the invention, a processor (not necessarily a graphics processor) includes a resolution reducer and a resolution enhancer that respectively reduce and enhance the resolution (and therefore the number of bits) of a signal that is to be stored or transmitted within the processor. Specifically, the resolution reducer reduces the resolution of a high resolution signal to generate a low resolution signal while maintaining another high resolution signal unchanged. Thereafter, the processor performs various actions (such as storage and/or transmission) on the high and low resolution signals in one or more intermediate circuits. An example of an intermediate circuit is a memory that stores the high and low resolution signals in two storage circuits, wherein one of the storage circuits has fewer storage locations than the other.
Next, the resolution enhancer enhances the low resolution signal to generate a signal (called "enhanced resolution signal") having the same number of bits as the high resolution signal. Thereafter, the processor uses the enhanced resolution signal in the normal manner, e.g. uses an enhanced color signal (that is obtained by enhancing a low resolution color signal) to display an image. An enhanced resolution signal of the type described herein is provided to any circuit that normally receives the high resolution signal, e.g. provided to a rendering stage in a pipeline of a graphics processor.
In one embodiment, a resolution reducer includes a truncator that simply drops a predetermined number of least significant bits (also called "low order bits") of a high resolution signal to generate the low resolution signal. Thereafter, the unchanged (high) resolution signal and the changed (low) resolution signal are both processed within the processor in a manner similar or identical to one another, e.g. both stored and/or both transmitted. Note that the low resolution signal of this embodiment can be directly displayed (in the normal manner) if necessary, without any further processing. Maintenance of a high resolution signal unchanged is a critical aspect of this embodiment, because the high resolution signal is used by the resolution enhancer (as discussed next) to generate the enhanced resolution signal from the low resolution signal.
Specifically, in this embodiment, a resolution enhancer receives the low resolution signal on a low resolution bus, and in addition also receives, on a high resolution bus, the high resolution signal that is normally stored or transmitted in a similar manner to the low resolution signal. Thereafter, the resolution enhancer passes to an enhanced resolution bus, as the enhanced resolution signal, the low resolution signal and the above-described number of least significant bits of the high resolution signal. That is, in this embodiment, the enhanced resolution signal is merely a concatenation (obtained by simply passing the to-be-concatenated signals to lines that are located next to each other) of the low resolution signal and a portion of the high resolution signal.
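The truncator and the concatenating enhancer can be modeled with bit operations. This is an illustrative sketch under stated assumptions: the function names are hypothetical, and the choice of k = 8 dropped bits (24-bit high resolution, 16-bit low resolution) follows the implementation described later in this text.

```python
# Minimal model (assumed names) of the truncating resolution reducer and the
# concatenating resolution enhancer. In hardware both are just wiring.

K = 8  # number of least significant bits dropped (assumed, per 24-bit/16-bit example)

def reduce_resolution(high, k=K):
    """Truncator: drop the k low order bits of a high resolution signal."""
    return high >> k

def enhance_resolution(low, high, k=K):
    """Concatenate the low resolution signal with the k least significant
    bits of the (unchanged) high resolution signal."""
    return (low << k) | (high & ((1 << k) - 1))

high = 0xABCDEF                   # 24-bit high resolution signal, kept unchanged
low = reduce_resolution(high)     # 16-bit low resolution signal: 0xABCD
print(hex(enhance_resolution(low, high)))  # 0xabcdef
```

When the low resolution signal was derived from the same value as the accompanying high resolution signal, the concatenation reproduces the original bits exactly, which is the interior-pixel case discussed below.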
Dropping least significant bits to form low resolution signals, and concatenating least significant bits from a high resolution signal to form enhanced resolution signals, require just lines and no other circuitry. Specifically, such lines couple the least significant lines of the high resolution bus to the least significant lines of the enhanced resolution bus. In such an implementation, each of the resolution reducer and the resolution enhancer is devoid of any circuitry such as logic elements or storage elements. Therefore, the hardware required for implementing such a resolution reducer and a resolution enhancer is one or more orders of magnitude lower than the hardware required to implement a prior art method of compression and decompression.
In one implementation, a graphics processor changes (reduces and enhances) the resolution of one or more multisample signals (that are to be averaged prior to displaying a pixel in an image). When a pixel (also called "interior pixel") is entirely covered by a graphics primitive (such as a triangle), the low resolution signal (obtained after reducing the resolution) is exactly identical to a first number of most significant bits (also called "high order bits") of the high resolution signal. In such a case, there is no loss of information in reducing and enhancing the resolution, because after enhancement the enhanced resolution signal is exactly identical to the high resolution signal (i.e. the enhanced resolution signal is exactly correct; there is no error whatsoever). Therefore, a signal obtained after averaging of the multisample signals remains identical to the high resolution signal. Such correctness of the enhanced resolution signal allows the graphics processor to display the interior pixel at the maximum resolution (equal to the total number of bits of the high resolution signal).
When a pixel (also called "edge pixel") is only partially covered by the graphics primitive, at least one multisample signal is changed to the color of the graphics primitive, while at least another multisample signal remains unchanged. Therefore, the most significant bits of at least two multisample signals of such a pixel are different. After resolution reduction and enhancement, error is introduced in the least significant bits of the enhanced resolution signal (because the least significant bits are made equal during enhancement, although originally these bits of the two multisamples were not identical). Such error is similar to noise. Specifically, the graphics processor displays the edge pixel in exactly the same manner as an interior pixel, but the effective color resolution of the edge pixel is lower than the resolution of an interior pixel due to the just-described error. Note that there is no distinction in the two sets of acts that are performed to respectively display an edge pixel and an interior pixel. The number of least significant bits that are dropped during resolution reduction is predetermined to be sufficiently low to ensure that the human eye does not notice the difference in resolution at the edges in a displayed image, e.g. at junctions of one or more triangles. Therefore the error introduced by resolution reduction as described herein does not result in any noticeable artifacts in the displayed image (i.e. artifacts, although present, are imperceptible due to their location at the image's edges).
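The edge-pixel error can be demonstrated numerically: borrowing the least significant bits of a different multisample introduces an error of at most 2^k - 1, where k is the number of dropped bits. The following is an illustrative sketch with assumed names and example values; k = 8 follows the 24-bit/16-bit example used later in the text.

```python
# Illustrative check: enhancing a low resolution multisample with the LSBs of
# a *different* high resolution multisample introduces a bounded error.

K = 8  # dropped bits (assumed)

def truncate(v, k=K):
    return v >> k

def enhance(low, high, k=K):
    return (low << k) | (high & ((1 << k) - 1))

edge_sample = 0x123456   # multisample recolored by the partially covering primitive
high_sample = 0xABCDEF   # the unchanged high resolution multisample

low = truncate(edge_sample)            # 0x1234 after dropping 8 bits
enhanced = enhance(low, high_sample)   # 0x1234EF: LSBs borrowed from high_sample
error = abs(enhanced - edge_sample)
print(hex(enhanced), error)            # 0x1234ef 153
assert error < 2 ** K                  # error confined to the dropped-bit range
```

Because the error never exceeds the value range of the dropped bits, it perturbs only the low order bits of the averaged result, which is why it behaves like low-level noise at edges.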
Resolution reduction as described herein results in correct color at the maximum resolution for an interior pixel, and correct color only at a low resolution for an edge pixel. The resolution of color of a pixel in accordance with the invention is non-uniform, and changes depending on the location of a pixel relative to one or more surfaces in the displayed image. In one specific implementation, a graphics processor uses four multisample signals per pixel, wherein three of the four signals have their resolution reduced (and enhanced). In one implementation, all three low resolution signals are 16 bits wide, and the fourth (high resolution) signal is 24 bits wide. In this implementation, the resolution enhancer passes the same portion of the high resolution signal, namely its 8 least significant bits (3 for red, 2 for green and 3 for blue), to three enhanced resolution buses to form three 24-bit enhanced resolution signals.
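The per-channel bit layout of this specific implementation (8,8,8 reduced to 5,6,5) can be sketched as follows. The channel widths come from the text; the function names and packing order are assumptions for illustration.

```python
# Sketch of the 24-bit (8,8,8) -> 16-bit (5,6,5) reduction and the per-channel
# enhancement: each channel concatenates its truncated bits with the least
# significant bits (3 red, 2 green, 3 blue) of the high resolution signal.

def split_888(color):
    """Unpack a 24-bit RGB value into its 8-bit channels."""
    return (color >> 16) & 0xFF, (color >> 8) & 0xFF, color & 0xFF

def reduce_565(color):
    """Truncate the 8-bit channels to 5, 6 and 5 bits and pack as 16 bits."""
    r, g, b = split_888(color)
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def enhance_565(low, high):
    """Rebuild a 24-bit signal from a 16-bit signal plus the high resolution
    signal's dropped channel bits."""
    hr, hg, hb = split_888(high)
    r = (((low >> 11) & 0x1F) << 3) | (hr & 0x07)
    g = (((low >> 5) & 0x3F) << 2) | (hg & 0x03)
    b = ((low & 0x1F) << 3) | (hb & 0x07)
    return (r << 16) | (g << 8) | b

# Interior-pixel case: all multisamples identical, so recovery is exact.
high = 0x80C040
assert enhance_565(reduce_565(high), high) == high
```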
Although in one embodiment the resolution reducer includes a truncator that just drops the predetermined number of low order bits, thus retaining at least the most significant bit (MSB), alternative embodiments of the resolution reducer perform different or additional acts of compression to reduce the number of bits of one or more high resolution signals. In one alternative embodiment, the high resolution signal is mathematically transformed, e.g. by performing a logarithmic operation or by conversion from a first color encoding (e.g. RGB) to a second color encoding (e.g. luma/color difference), and thereafter the resolution is reduced (e.g. by dropping color difference bits). In this embodiment, after the above-described dropping of bits to obtain three low resolution multisample signals, all four multisample signals are compressed in compression circuits by a well-known lossless compression method, such as JPEG lossless compression as described in Chapter 2 of the book entitled "Image and Video Compression Standards" by Vasudev Bhaskaran and Konstantinos Konstantinides, Kluwer Academic Publishers, 1995 (see pages 15-51 that are incorporated by reference herein). In this embodiment, when the four multisample signals are decompressed in decompression circuits, the high resolution multisample signal is recovered unchanged, and is used to enhance the three low resolution multisample signals as described herein.
In another such alternative embodiment, instead of truncation (as described above), three of the multisample signals are compressed (in compression circuits) by a well-known lossy compression method, such as JPEG lossy compression as described in Chapter 3 of the above-described book (see pages 52-86 that are incorporated by reference herein). Depending on the implementation, the fourth multisample signal is compressed by a lossless compression method, or left as is. In either of the just-described implementations, the fourth multisample signal is recovered unchanged on decompression (in a decompression circuit), and is used as described herein.
In yet another such alternative embodiment, three multisample signals are compressed by a first lossy compression method (in compression circuits), while the fourth multisample signal is compressed by a second lossy compression method (in another compression circuit). The second lossy compression method preserves more resolution than the first lossy compression method, so that the fourth multisample signal (when uncompressed) has a medium resolution that is greater than the resolution of the three multisample signals (when uncompressed). In this embodiment as well, the fourth multisample signal can be used to enhance resolution of the three multisample signals. Note that in each embodiment described above, when all signals input to a resolution reducer are identical, all signals output by a resolution enhancer are also identical.
In one variant of the above-described embodiments, a first resolution enhancer is directly coupled to a resolution reducer by just lines in the processor. The first resolution enhancer allows transmission (on the lines) of one or more low resolution signals for each pixel (e.g. the above-described three low resolution signals and one high resolution signal). After the first resolution enhancer enhances resolution of the low resolution signals, the enhanced resolution signals are to be input to another circuit, such as a stage of the pipeline. In one variant of the just-described embodiment, an adder stage located downstream from the first resolution enhancer generates an averaged signal for display of an image on a screen.
In another variant of the above-described embodiments, a second resolution enhancer is coupled to the output terminals of a number of storage circuits (such as static random access memories, abbreviated as SRAMs). The second resolution enhancer allows storage of one or more multisample signals of each pixel at a low resolution (e.g. three low resolution signals and one high resolution signal). The second resolution enhancer enhances resolution of the low resolution signals whenever the memory is read. Therefore, in this embodiment, low resolution signals are stored in and retrieved from the storage circuits prior to receipt by the second resolution enhancer (for enhancement of the low resolution signals).
In one implementation, a graphics processor includes a resolution reducer, a memory coupled to receive one or more low resolution signals from the resolution reducer, a second resolution enhancer coupled to output terminals of the memory, a first resolution enhancer coupled to also receive one or more low resolution signals from the resolution reducer, and an adder stage coupled to the first resolution enhancer. Such use of two resolution enhancers reduces both transmission lines and memory size, and yet allows such a graphics processor to generate an image that does not have any noticeable artifacts.
Although the above-described embodiments require a resolution enhancer to enhance resolution of the low resolution signal, resolution enhancement is not required in an alternative embodiment. In one such alternative embodiment, a resolution reducer reduces the resolution (as described above), and one or more intermediate circuits process the high and low resolution signals (also as described above). Thereafter, the high and low resolution signals are used directly (i.e. without enhancement). In one variant, the signals represent multisamples in a pixel. In this variant, one or more low resolution signals that represent only luminance are combined, together with the luminance portion of a high resolution color signal, in a blender (hereinafter "luminance blender") to obtain an average luminance for the pixel. Thereafter, the color portions of the high resolution color signal are used with the just-described average luminance to display the pixel. Therefore, resolution enhancement is not a critical aspect of the invention.
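One possible reading of the luminance blender variant can be sketched as follows. This is a hedged sketch, not the patented circuit: the function names, the Rec. 601 luma weights, and the choice to rescale the high resolution color so that its luminance matches the blended average are all assumptions made for illustration, since the text does not specify how the color portions and the average luminance are combined.

```python
# Hypothetical sketch of the "luminance blender" variant: three multisamples
# are stored as luminance only; the pixel is displayed using the averaged
# luminance together with the color of the high resolution multisample.

def luma(r, g, b):
    """Luminance of an RGB color (Rec. 601 weights, an assumption here)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def blend_pixel(luma_samples, high_rgb):
    """Average the stored luminances with the high resolution multisample's
    luminance, then rescale the high resolution color to that luminance."""
    hr, hg, hb = high_rgb
    high_luma = luma(hr, hg, hb)
    avg = (sum(luma_samples) + high_luma) / (len(luma_samples) + 1)
    scale = avg / max(high_luma, 1e-6)  # avoid division by zero for black
    return tuple(min(255, round(c * scale)) for c in (hr, hg, hb))
```

When all four multisamples have the same luminance (an interior pixel), the blend leaves the high resolution color essentially unchanged, mirroring the exact-recovery property of the concatenating enhancer.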
Although one example of the invention is implemented in a graphics processor, in other examples of the invention, the circuitry and method described herein are used in other processors. Examples of signals that may be processed in such other processors include signals for temperature and pressure.