In computer graphics, digital pictures are represented as a grid of individual color dots called “pixels.” The entire grid is called the “raster image” or simply the “raster.” The process of creating a raster image is called “rasterization” or “rasterizing” the image. The raster image displays a scene that typically includes one or more objects that are each composed of one or more pixels. The objects may be three-dimensional objects that are rendered for display in two dimensions. Several types of operations may be performed on the objects including, but not limited to, hidden surface removal, application of texture, lighting, shadows, and animation.
Although the raster image displays individual pixels, the characteristics of an individual pixel may be represented or evaluated by a grid of subpixels when performing graphical manipulations and calculations. As used herein, a “subpixel” is a portion of a pixel. For example, a graphics application may use an 8×8 subpixel grid to represent and evaluate each pixel.
The creation and manipulation of high quality images in computer graphics involves considerable computations, particularly if the entire raster image is being rendered. One trend in computer graphics for improving performance is to render only a portion of the entire image. For example, only a particular object, a particular set of objects, or a defined portion of a raster image is rendered, thereby using fewer computational resources. The re-rendering of only a portion of the raster image instead of rendering the entire raster image is referred to as “incremental updating” of the raster image. In certain cases, incremental updating may involve re-applying a particular type of effect, such as a change to the shading or lighting in the raster image, a change to a particular material within the raster image, or the re-rendering of just a defined portion of the raster image.
Although each pixel in a raster image can display only a single color, two or more objects or effects in a scene may geometrically cover a portion of the area represented by a pixel. The portion of each object or effect that covers some or all of the area represented by the pixel is called a “fragment.” Each fragment is typically associated with a set of information that is related to the fragment. For example, the set of information for a fragment typically includes, but is not limited to, the following types of information: the location of the fragment; the surface normal of the fragment at the location; the opacity of the fragment (e.g., how transparent the fragment is); a pointer back to the geometry of the object that the fragment is associated with, such that material properties and other features stored in data that is associated with the object may be identified with the fragment; and a sub-pixel mask that indicates which sub-pixels of a pixel are covered by the fragment.
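The fragment information described above can be sketched as a simple record. The field names, types, and the 4×4 subpixel-mask layout below are illustrative assumptions for concreteness, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    """One object's contribution to a single pixel (field names illustrative)."""
    x: int                  # pixel location of the fragment
    y: int
    normal: tuple           # surface normal (nx, ny, nz) at the location
    opacity: float          # 0.0 = fully transparent, 1.0 = fully opaque
    geometry_id: int        # pointer back to the source object's geometry
    subpixel_mask: int      # bitmask: one bit per covered subpixel

# Assuming a 4x4 subpixel grid, bit (row * 4 + col) marks subpixel (row, col).
def covers_subpixel(frag: Fragment, row: int, col: int) -> bool:
    """Report whether the fragment covers the given subpixel."""
    return bool(frag.subpixel_mask >> (row * 4 + col) & 1)
```

The `geometry_id` field stands in for the pointer back to the object's geometry; in a real engine it would let material properties stored with the object be looked up from the fragment.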
A fragment may be “visible” or “not visible.” For example, an object that is placed in front of or on top of another object is visible, whereas an object that is behind or under another object is not visible. Each fragment may be “transparent,” for example, for shading or lighting effects, thereby allowing the transparent fragment to contribute to the visible image of the object. Transparent layers are “composited” together as part of determining the final color for the pixel that is displayed in the raster image.
Conventionally, a rendering application may employ a rendering engine that operates in two phases: (1) an initialization phase where a fragment buffer is created and (2) an update phase where the fragment buffer is scanned to determine shading and other effects for the raster image. The fragment buffer stores the fragment information for each visible fragment in the raster image.
Often, a raster image may depict jagged edges for objects that overlap other objects at a particular pixel. For example, when determining the color to be used in the raster image for a particular pixel, the color of the object having the largest fragment for the particular pixel may be used. However, a drawback with this approach is that it may result in a jagged or stair-stepped appearance of the edges of the object. This jagged edge effect is called “aliasing.”
Because of aliasing, another trend in computer graphics is the use of anti-aliasing techniques to remove the stair-stepped appearance of object edges. Certain anti-aliasing techniques include the application of an “anti-aliasing filter” or simply a “filter.” For example, by applying an anti-aliasing filter to the pixels of a particular object, the colors of adjacent objects along the particular object's edge are combined or blended with the color of the particular object such that the resulting raster image shows a smooth edge for the particular object instead of a jagged edge. The use of such anti-aliasing filters is referred to as “filtering,” and the object images that have been filtered are referred to as “filtered images”.
For example, FIG. 1 is a diagram that depicts a pixel grid 110 and a filter 130 that are centered on an output pixel 120. Filter 130 extends for two pixels on all sides of output pixel 120 producing a 5×5 block of pixels that is covered by filter 130, which in this case is a pyramid filter. Assuming there is an X-Y coordinate system with the output pixel at coordinates (10,10), the 5×5 block of pixels covers pixels in a block from (8,8) to (12,12). While FIG. 1 depicts a filter covering a 5×5 block of pixels, any type of filter covering one or more pixels may be used.
During filtering, each pixel in the 5×5 block is examined and all visible fragments are weighted by the “strength” of filter 130 at the particular distance from the center of filter 130. A filter's strength, or “weight,” at a particular location is a mathematical scalar value that is associated with a filter at the particular location. In FIG. 1, the strength of filter 130 at a given location is represented by the height of filter 130 above pixel grid 110 at the given location. The shape of filter 130 as depicted in FIG. 1 may be referred to as the “kernel” or “hat” of filter 130.
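A pyramid kernel of this kind can be sketched as a function of the offset from the filter center. The exact kernel formula is not given above, so the linear falloff and the square (Chebyshev-distance) base below are assumptions:

```python
def pyramid_weight(dx, dy, radius=2):
    """Strength of a pyramid ("tent") filter at offset (dx, dy) from the
    center: peaks at the center and falls linearly to zero at the edge
    of the (2*radius + 1) x (2*radius + 1) block.  Sketch only; the
    actual kernel shape may differ."""
    fall = max(abs(dx), abs(dy))  # Chebyshev distance gives a square base
    return max(0.0, (radius + 1 - fall) / (radius + 1))

# Center pixel of the 5x5 block gets full weight; corners get the least.
print(pyramid_weight(0, 0))  # -> 1.0
print(pyramid_weight(2, 2))  # corner of the 5x5 block, small but nonzero
```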
Colors in computer graphics are typically represented by a triple of scalar values, each in the range between 0.0 and 1.0. The triple is organized as ( r, g, b ) where “r” refers to the red component, “g” to the green component, and “b” to the blue component. In this representation, white is (1, 1, 1), black is (0, 0, 0), full red is (1, 0, 0), dark blue-green is (0, 0.2, 0.2), etc.
For example, assume a pixel 122 that is located at coordinates (11,9) has only the color red, which is represented as (1, 0, 0). If the strength of filter 130 at pixel 122 is 0.2, then the weighted color of pixel 122 is (0.2, 0, 0) (e.g., 0.2×1.0=0.2; 0.2×0.0=0.0; 0.2×0.0=0.0). A weighted color for each of the other pixels covered by filter 130 in pixel grid 10 are similarly generated by multiplying the color of each pixel by the strength of filter 130 at the location of the pixel.
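The weighting step for a single pixel is just a per-channel scalar multiply, as in this small sketch reproducing the example above:

```python
def weighted_color(color, weight):
    """Scale an (r, g, b) color triple by a filter weight."""
    r, g, b = color
    return (weight * r, weight * g, weight * b)

# Pixel 122 at (11, 9) is pure red; the filter strength there is 0.2.
print(weighted_color((1.0, 0.0, 0.0), 0.2))  # -> (0.2, 0.0, 0.0)
```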
While FIG. 1 depicts a filter with weighting that is based on a pyramid shape, other filters that are associated with different weightings may be used, such as a flat square, a circular cylinder, a curved shape based on a cubic spline, or virtually any other specified shape. Generally, filters are center-dominant such that pixels that are closer to the center of the filter have a greater weight than pixels further away from the center, resulting in a peaked shape such as that of filter 130. However, filters that are not center-dominant may also be used, such as filters that give more weight to farther pixels than to closer pixels or filters that have a uniform weighting scheme. Some filters may include both positive and negative weights.
Conventionally, to determine the final color for output pixel 120, the weighted colors for the pixels covered by filter 130 are summed together. The values of filter 130 are typically normalized so that the sum of the weighted colors for the pixels under the hat of filter 130 represents the final color for output pixel 120. For example, if a filter covers a 3×3 block of pixels and the filter is uniform, the weight for each pixel is 1/9, or 0.1111, and the sum of the weights for the pixels is 1 (e.g., 9×0.1111=1). Alternatively, the filter values may not be normalized initially, and therefore the final summed color is normalized based on the sum of the initial filter values to determine the final color for an output pixel. Filter values typically range from −1 to 1, although other ranges may be used as well.
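The weight-sum-normalize procedure for one output pixel can be sketched as follows; the helper name and the flat list representation of the covered pixels are illustrative assumptions:

```python
def filter_pixel(colors, weights):
    """Convolve one output pixel: weight each covered pixel's (r, g, b)
    color, sum the weighted colors, and normalize by the sum of the
    weights (so unnormalized kernels also work)."""
    total = sum(weights)
    return tuple(
        sum(w * c[i] for w, c in zip(weights, colors)) / total
        for i in range(3)
    )

# A uniform 3x3 filter: nine equal weights of 1/9, which sum to 1, so
# a block of identical red pixels filters to the same red.
nine_red = [(1.0, 0.0, 0.0)] * 9
print(filter_pixel(nine_red, [1 / 9] * 9))  # -> (1.0, 0.0, 0.0)
```

Dividing by the sum of the weights implements the alternative described above, where unnormalized filter values are normalized after the summation.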
The process of applying weights to the colors and summing them together is called “convolution.” Convolution may be done at a variety of levels of resolution, such as at the pixel level or at the subpixel level. Generally, the finer the resolution at which convolution is performed, the higher the quality of the final raster image, although the higher quality is achieved at the expense of more calculations.
Although the example of FIG. 1 is described in terms of objects having a particular color, in general, each pixel or subpixel may be covered by several fragments that contribute to the pixel's final appearance. For example, when portions of more than one object are visible within a pixel, there will be one fragment for each object within the pixel, and the sub-pixel masks will resolve the relative occlusion between the objects. The final appearance of a pixel is then “gathered” from all of the fragments in each of the pixels covered by the filter.
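The gathering step can be sketched as a coverage-weighted sum over a pixel's fragments. This is a minimal model under the assumption that the subpixel masks already resolve occlusion, so each fragment simply contributes in proportion to the subpixels it covers:

```python
def gather_pixel_color(fragments, subpixels=16):
    """Gather a pixel's color from (color, subpixel_mask) pairs: each
    fragment contributes its color in proportion to the subpixels its
    mask covers (masks assumed to already resolve occlusion)."""
    final = [0.0, 0.0, 0.0]
    for color, mask in fragments:
        coverage = bin(mask).count("1") / subpixels
        for i in range(3):
            final[i] += coverage * color[i]
    return tuple(final)

# A red object covers the left half of a 4x4 subpixel grid, white the right.
left = sum(1 << (row * 4 + col) for row in range(4) for col in range(2))
right = ((1 << 16) - 1) ^ left
print(gather_pixel_color([((1.0, 0.0, 0.0), left), ((1.0, 1.0, 1.0), right)]))
# -> (1.0, 0.5, 0.5), an even mix of the two fragments
```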
FIG. 2 is a diagram that depicts pixel grid 110, a red object 240, a white object 250, and an edge 260 between red object 240 and white object 250. In the example of FIG. 2, edge 260 is oriented such that half of output pixel 120 is covered by red object 240 and half of output pixel 120 is covered by white object 250. When filter 130 is applied to red object 240, each pixel in red object 240 is filtered using a 5×5 block of pixels centered on each pixel. For example, all of the pixels in the 5×5 block from (8,8) to (12,12) will be used to calculate the final color of output pixel 120 at (10,10). Because the 5×5 block encompasses pixels within both red object 240 and white object 250, when the 5×5 block is near edge 260, the final color of output pixel 120 will be influenced by both the colors red and white.
Further, each pixel will affect neighboring pixels when the neighboring pixels are filtered. For example, the original red half and the original white half of output pixel 120 at (10,10) will affect the color of pixels up to two pixels away when the filter is applied to those pixels. For example, when a pixel 124 that is located at (12,10) is filtered, the filter centered over that pixel covers the 5×5 block (10,8) to (14,12), and output pixel 120 at (10,10) falls within the block. Likewise, pixel 124 and any pixels that are filtered in the 5×5 block within red object 240, from (10,8) to (12,12), will be somewhat affected by the original red half and white half of output pixel 120 at (10,10).
By using filter 130, the colors from the objects bordering edge 260 become mixed or blended, resulting in a smoother, higher quality border around red object 240 that reduces the jagged edge appearance that typically occurs when anti-aliasing filters are not used. After red object 240 is filtered, the new filtered colors for each pixel are retained and the original colors for each pixel and subpixel are discarded.
Filter 130 may be applied at any of a variety of resolutions, such as that of the pixels of pixel grid 110 or of a subpixel grid within each pixel, such as a subpixel grid 230 depicted in FIG. 2 as having a 4×4 grid of subpixels. Although the use of a subpixel grid will require more computations, the final raster image will have a higher quality than if only a single pixel level of resolution is used.
FIG. 3A is a diagram that depicts a scene 310 that has been filtered. Scene 310 includes a red box 320 that is positioned on a white table 330. An edge 340 shows the border between red box 320 and white table 330. As a result of applying a filter to red box 320, edge 340 appears as a smooth edge instead of a jagged edge.
FIG. 3B is a diagram that depicts a magnified view of a portion of edge 340 of red box 320 after application of the filter to red box 320. FIG. 3B depicts the color of individual pixels that are represented by small squares. The squares along edge 340 show various blended colors, such as lighter shades of red and pinkish colors, between the red color of red box 320 and the white color of white table 330, which results in the smooth appearance of edge 340 in FIG. 3A.
Conventionally, a problem arises with using a graphics application to incrementally update a selected object when using anti-aliasing filters, namely that a “halo” effect often appears when the appearance of an object is incrementally updated. For example, referring back to FIG. 2, assume that the color of red object 240 is changed to blue, and that prior to the color change, the anti-aliasing filter used the original red color of red object 240 when selecting the color for the pixels along edge 260 between red object 240 and white object 250. In conventional systems, once the filter is applied to the red object, the original color information is discarded. Thus, only the filtered colors for the pixels along edge 260 are maintained. Therefore, when red object 240 is re-rendered to change the color to blue, without re-rendering white object 250, filter 130 is applied again using the blue color of the pixels and subpixels of red object 240 (which is now blue in color, not red) along edge 260. Because white object 250 was not re-rendered, filter 130 must use the previously filtered colors where white object 250 was present. These pixels, however, still retain traces of the previous filtering of red object 240 before the color change. Hence, both the original red color of red object 240 and the new blue color for red object 240 will influence the refiltered pixels along edge 260. The result is a purple (blue+red) halo around re-rendered red object 240 that now has the blue color. In FIG. 2, the halo effect would appear in the block of pixels within red object 240 that is defined by pixels (10,8) and (12,12), since those pixels will be influenced by the pixels along edge 260 when filter 130 is used.
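The halo mechanism can be illustrated with a toy one-dimensional model. The two-sample averaging "filter" below is a deliberate simplification of the actual pipeline, used only to show how a stale filtered color leaks the old red into the refiltered result:

```python
# Toy model of the halo: a border pixel's stale filtered color still
# carries the old red, so refiltering the now-blue object mixes red in.
red, white, blue = (1.0, 0.0, 0.0), (1.0, 1.0, 1.0), (0.0, 0.0, 1.0)

def blend(a, b):
    """Uniform two-sample filter: average two colors channel by channel."""
    return tuple((x + y) / 2 for x, y in zip(a, b))

stale_edge = blend(red, white)    # edge pixel filtered before the change
haloed = blend(blue, stale_edge)  # refiltered against the stale color
correct = blend(blue, white)      # what a full re-render would produce
print(haloed)   # -> (0.5, 0.25, 0.75): a purple cast from the old red
print(correct)  # -> (0.5, 0.5, 1.0)
```

Repeating the refilter against the stale value, as described for repeated incremental updates, keeps folding the old red contribution back in, which is why the halo intensifies.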
Moreover, if red object 240 (now with the color blue) is re-rendered and re-filtered again due to other changes, such as different shading effects, a further contribution of the original red color is added, which magnifies the halo effect. Thus, as red object 240 is repeatedly refiltered, the purple halo becomes brighter and more noticeable.
One approach for removing the halo effect is to re-render the entire raster image so that there are no contributions from previous colors. However, such an approach requires considerable computations and defeats the purpose of the incremental update technique, which is to render less than the entire image to improve performance. Another approach for removing the halo effect is to forgo the use of the anti-aliasing filters. However, this approach results in the appearance of jagged edges around the objects shown in the raster image, thereby degrading the quality of the objects shown in the raster image.
Based on the foregoing, it is desirable to provide improved techniques for incrementally updating graphical images when using anti-aliasing techniques.