1. Field of the Invention
The present invention relates generally to image processing, and more particularly, to a method for combining two images in order to form a composite image. Still more particularly, the present invention is a method for performing a parallel interpolation to form a composite of two images on a pixel-by-pixel basis.
2. Description of the Background Art
The fundamental data unit used to perform computational operations is known as a data word. Execution of a given instruction may result in a plurality of data words being fetched from memory, arithmetic or logical operations being performed on any or all of these data words, and the storage in memory of one or more data words corresponding to a result. In modern computer systems having a CPU based upon a microprocessor such as the Intel 80486 or Motorola MC68040, the data word size is 32 binary digits (bits). Nearly all instructions available on a given CPU result in the execution of one or more operations on either single-word or double-word quantities.
In computer graphics or image processing applications, an image comprises an array of values. Each value of the array corresponds to one picture element (pixel) of the image. Each pixel is typically represented by a plurality of quantities that specify color, shading, or other pixel characteristics. These quantities are generally smaller than a modern computer system's data word, and are commonly represented by 8-bit byte values. An exemplary pixel representation is shown in FIG. 1, where four single-byte values are used to indicate red, green, blue, and opacity characteristics for each pixel. All four bytes are stored or "packed" into a single 32-bit data word. Although other word sizes and pixel characteristic representations exist, the situation shown in FIG. 1 is quite common and will be considered herein.
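The packing described above can be sketched in C. This is an illustrative sketch only; the byte ordering (red in the most significant byte) is an assumption, since FIG. 1 is not reproduced here, and the helper name is hypothetical.

```c
#include <stdint.h>

/* Hypothetical packing helper: four 8-bit pixel characteristics stored in
   one 32-bit data word. Assumed layout (not confirmed by FIG. 1):
   bits 31-24 red, 23-16 green, 15-8 blue, 7-0 opacity. */
static uint32_t pack_pixel(uint8_t r, uint8_t g, uint8_t b, uint8_t o)
{
    return ((uint32_t)r << 24) | ((uint32_t)g << 16) |
           ((uint32_t)b << 8)  |  (uint32_t)o;
}
```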
An important image processing operation is blending, where two images must be combined on a pixel-by-pixel basis to produce a composite image. In forming the composite image, a first image is defined as a foreground image, and a second image is defined as a background image. Their combination is accomplished through an interpolation between corresponding pixels in each image, where each pixel's characteristics are scaled in relation to a blending factor. The blending factor indicates a fractional constant k by which the foreground pixel characteristics are scaled; the corresponding background pixel characteristics are scaled by (1-k). The scaled foreground pixel characteristics are added to the corresponding scaled background pixel characteristics to produce the characteristics of the composite pixel. In certain blending situations, the foreground image is identical to the background image. In this case, the blending is an interpolation between pixel values within a single source image to determine image characteristics at locations that do not precisely correspond to pixel locations. This type of blending occurs in antialiasing, panning, and texture mapping situations.
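The interpolation described above can be expressed for a single pixel characteristic as follows. This is a minimal sketch of the stated formula, composite = k × foreground + (1 - k) × background; the function name and the rounding choice are assumptions for illustration.

```c
#include <stdint.h>

/* One channel of the composite pixel: the blending factor k scales the
   foreground characteristic, and (1 - k) scales the corresponding
   background characteristic. k is a fraction in [0, 1]. */
static uint8_t blend_channel(uint8_t fg, uint8_t bg, double k)
{
    /* Round to the nearest integer before truncating to a byte. */
    return (uint8_t)(k * fg + (1.0 - k) * bg + 0.5);
}
```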
Since multiple pixel characteristics are packed into a data word, performing operations directly on the packed data word can result in one or more pixel characteristics within the packed data word having incorrect values. For instance, if the packed data word is shifted, one or more bits are translated from a given pixel characteristic into an adjacent pixel characteristic, corrupting the value of the adjacent pixel characteristic. To eliminate this problem, the prior art methods store each pixel characteristic (i.e., each byte) within an individual data word, a process known as unpacking. The prior art method for performing the blending interpolation is shown in the flowchart of FIG. 2. In the first step, the blending factor and the packed data words corresponding to the foreground pixel and the background pixel are retrieved. Next, each pixel characteristic within the packed foreground pixel data word is unpacked into an individual data word, after which each corresponding pixel characteristic within the packed background pixel data word is likewise unpacked. The unpacking process is carried out through logical and shifting operations. Next, the mathematical operations required for interpolation are carried out on each corresponding pair of data words. These operations comprise adding appropriately shifted versions of the unpacked data words, the effect of which is to multiply one data word by a fractional constant k = m × 2^-n specified by the blending factor, to multiply the other data word by (1-k), and to form the sum of the two. Each result is then stored in a packed result data word via a packing process. This step, like the earlier unpacking process, is implemented with logical and shifting operations.
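The prior art unpack-interpolate-repack sequence can be sketched in C for the simple case k = 2^-1 (m = 1, n = 1), where the multiplication reduces to a single right shift. The function name and the RGBA byte layout are assumptions; the structure of the loop mirrors the steps of FIG. 2.

```c
#include <stdint.h>

/* Prior-art style 50/50 blend of two packed pixel data words: each
   characteristic is unpacked into its own word, scaled by shifting
   (multiplication by 2^-1), summed, and repacked into the result word. */
static uint32_t blend_unpacked_half(uint32_t fg, uint32_t bg)
{
    uint32_t result = 0;
    for (int shift = 0; shift <= 24; shift += 8) {
        uint32_t f = (fg >> shift) & 0xFFu;   /* unpack one characteristic */
        uint32_t b = (bg >> shift) & 0xFFu;
        uint32_t c = (f >> 1) + (b >> 1);     /* scale each by 2^-1 and add */
        result |= (c & 0xFFu) << shift;       /* repack into the result word */
    }
    return result;
}
```

Even in this compact form, each characteristic costs separate unpack, scale, add, and repack operations; the assembly-level equivalent multiplies these costs out into the dozens of instructions noted in Appendix A.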
Once interpolation has taken place between all appropriate pixel characteristic pairs and the results have been packed into the packed result data word, the method ends, with the composite pixel characteristics contained within the packed result word. For the pixel representation shown in FIG. 1, the prior art method requires eight unpacking steps and four packing steps. Appendix A shows a pseudo-code assembly language program that performs an exemplary interpolation between foreground and background pixel characteristics using the prior art method. The blending is particularly simple in this case, wherein the composite pixel characteristics comprise fifty percent of the foreground pixel's characteristics and fifty percent of the background pixel's characteristics. In other words, this is a scaling of the foreground and background pixel characteristics by 2^-1. As can be seen from the pseudo-code, 47 assembly language steps are required.
Each unpacking and packing step in the prior art method requires a given amount of time to complete. Moreover, repeating the same mathematical operations for each pixel characteristic adds to the time required to complete the interpolation process. Current technology has made an image size of 1024×768, or 786,432 pixels, commonplace. In the pixel representation depicted in FIG. 1, such an image comprises millions of pixel characteristics. As a result, millions of interpolations and an even greater number of computational operations must be performed when forming a composite image. Any reduction in the time required for a pixel-by-pixel operation will therefore significantly decrease the overall time required to modify the displayed image. What is needed is a method for performing an interpolation in which operations occur simultaneously on all pixel characteristics within two packed data words. This would eliminate the need to unpack and repack data words and ensure that the operations required for the interpolation are performed only one time, thereby minimizing the interpolation time.
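To illustrate the kind of simultaneous operation called for above, one well-known technique (offered here as an illustrative sketch, not necessarily the method claimed by the present invention) computes a 50/50 blend of all four packed characteristics in a handful of operations, with no unpacking:

```c
#include <stdint.h>

/* Illustrative sketch only: a 50/50 blend of all four packed
   characteristics at once. The identity (a + b)/2 = (a & b) + ((a ^ b) >> 1)
   holds for any pair of bytes, and masking with 0x7F7F7F7F discards the bit
   that would otherwise shift across a byte boundary, so no characteristic
   corrupts its neighbor. */
static uint32_t average_packed(uint32_t a, uint32_t b)
{
    return (a & b) + (((a ^ b) >> 1) & 0x7F7F7F7Fu);
}
```

Three logical operations, one shift, and one add replace the eight unpacking steps, four packing steps, and per-characteristic arithmetic of the prior art method.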