There are two basic components to an image warp: (1) spatial transformation and (2) resampling through interpolation. A spatial transformation defines a geometric relationship between each point in the input image and the corresponding point in the warped image.
Inverse mapping is typically used to generate the warped image from the input image. The inverse mapping specifies, for each location in the warped image, a reference location in the source image. The locations in the warped image are commonly processed sequentially: at each integer pixel position in the warped image, the address of a corresponding pixel value in the input image is calculated. Because the inverse mapping can be arbitrary, this address is generally not an integer; that is, a pixel in the warped image may map to a position that falls between pixels in the input image. It is therefore desirable to use interpolation to generate a pixel value at such non-integral positions in the input image.
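The inverse-mapping loop described above can be sketched as follows. The affine transform used here is a hypothetical example only (any inverse mapping would do), and for brevity the sketch simply rounds to the nearest source pixel instead of interpolating:

```python
import math

def inverse_map(u, v):
    """Map a warped-image coordinate (u, v) back to a source coordinate (x, y).
    Here: a small rotation plus translation, chosen only for illustration."""
    theta = math.radians(10.0)
    x = math.cos(theta) * u - math.sin(theta) * v + 1.5
    y = math.sin(theta) * u + math.cos(theta) * v + 0.5
    return x, y  # generally non-integer positions in the source image

def warp(src, width, height):
    """src: dict mapping (ix, iy) -> pixel value; returns the warped image
    as a dict keyed by integer warped-image coordinates."""
    out = {}
    for v in range(height):
        for u in range(width):
            x, y = inverse_map(u, v)
            # x and y are usually fractional, so some form of interpolation
            # is needed; here we merely round to the nearest source pixel.
            ix, iy = round(x), round(y)
            out[(u, v)] = src.get((ix, iy), 0)  # 0 outside the source image
    return out
```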
Typically, several pixels surrounding the referenced input location are used to compute the output pixel value in the warped image. The larger the number of pixels used, the more accurate the resampling. As the number of pixels increases, however, so does the cost and complexity of the hardware implementation.
Current hardware approaches for performing the interpolation required for resampling use a two-by-two neighborhood of pixels around the address value of the pixel in the source image to calculate each pixel value in the warped image. This is commonly called bilinear interpolation: the output pixel is computed as a weighted average of a local neighborhood of four pixels. For a real-time implementation, four pixel values are accessed simultaneously every clock cycle. The four pixel values are then multiplied by the appropriate weights and summed to produce a pixel value in the warped image.
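The weighted four-pixel average can be sketched as follows; the dictionary-based pixel addressing is illustrative only and does not reflect the parallel-memory organization described above:

```python
import math

def bilinear(src, x, y):
    """Bilinear interpolation: a weighted average of the 2x2 neighborhood
    around the (generally fractional) source address (x, y).
    src is a dict mapping (ix, iy) -> pixel value."""
    x0, y0 = math.floor(x), math.floor(y)
    dx, dy = x - x0, y - y0
    # The four weights sum to 1; a hardware design would typically fetch
    # these four pixels in parallel and form the sum of products.
    return ((1 - dx) * (1 - dy) * src[(x0, y0)]
            + dx * (1 - dy) * src[(x0 + 1, y0)]
            + (1 - dx) * dy * src[(x0, y0 + 1)]
            + dx * dy * src[(x0 + 1, y0 + 1)])
```

For example, at the address (0.5, 0.5) all four weights equal 0.25, so the result is the mean of the four neighboring pixels.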
One such system for performing bilinear interpolation is described in Real-time Bilinear Interpolation Using the TMC2301 by Steve Gomez, TRW LSI Products Division, and dated Jan. 21, 1989. This system uses four separate memories that can be accessed in parallel when performing bilinear interpolation. The system also includes a look-up table for storing the coefficients to determine the weighted average. The weighting coefficients are multiplied by respective pixel values from the input image and, then, summed to produce a pixel value in the warped image. Real-time Bilinear Interpolation Using the TMC2301 is herein incorporated by reference for its teachings on bilinear interpolation.
The difficulty with this and other similar systems arises when bilinear interpolation is no longer adequate. In applications that require repeated warping of the same image or subpixel translations, bilinear interpolation may produce poor results. Typically, bilinear interpolators degrade the high spatial frequency components of an image to a greater extent than a higher quality interpolator does.
The next higher quality interpolator uses a three-by-three pixel area in the input image to compute each pixel value in the warped image. The complexity and expense of this type of warper increase dramatically relative to those of a bilinear interpolator: nine separate memories, nine coefficients, and a nine-term sum of products are required. In applications where size, power, and cost are at a premium, this is an unacceptable solution. If a better interpolation is required, for example four-by-four or greater, the problem is compounded.
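The nine-term sum of products, and its N-by-N generalization, can be sketched as follows, assuming a hypothetical separable one-dimensional kernel; the N-squared growth in reads, weights, and product terms is what drives the hardware cost:

```python
import math

def resample_nxn(src, x, y, kernel, n):
    """N-by-N resampling as an N*N-term sum of products.
    kernel(t) is a 1-D interpolation kernel; for n = 3 this requires nine
    pixel reads, nine weights, and a nine-term sum.
    src is a dict mapping (ix, iy) -> pixel value."""
    x0, y0 = math.floor(x), math.floor(y)
    half = (n - 1) // 2
    total = 0.0
    for j in range(n):
        for i in range(n):
            ix = x0 + i - half
            iy = y0 + j - half
            w = kernel(x - ix) * kernel(y - iy)  # separable 2-D weight
            total += w * src.get((ix, iy), 0)   # 0 outside the source image
    return total
```

With n = 2 and a triangle kernel, max(0, 1 - |t|), this sum reduces to bilinear interpolation, which shows that the two-by-two case is simply the smallest member of this family.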
To overcome these shortcomings, a new warper method and system is provided.