The present invention relates generally to digital video compression, and, more particularly, to a motion estimation search engine for a digital video encoder that is simpler, faster, and less expensive than the presently available technology permits.
Many different compression algorithms have been developed in the past for digitally encoding video and audio information (hereinafter referred to generically as a "digital video data stream") in order to minimize the bandwidth required to transmit this digital video data stream for a given picture quality. Several multimedia specification committees have established and proposed standards for encoding/compressing and decoding/decompressing audio and video information. The most widely accepted international standards have been proposed by the Moving Pictures Expert Group (MPEG), and are generally referred to as the MPEG-1 and MPEG-2 standards. Officially, the MPEG-1 standard is specified in the ISO/IEC 11172-2 standard specification document, which is herein incorporated by reference, and the MPEG-2 standard is specified in the ISO/IEC 13818-2 standard specification document, which is also herein incorporated by reference. These MPEG standards for moving picture compression are used in a variety of current video playback products, including digital versatile (or video) disk (DVD) players, multimedia PCs having DVD playback capability, and satellite broadcast digital video. More recently, the Advanced Television Standards Committee (ATSC) announced that the MPEG-2 standard will be used as the standard for Digital HDTV transmission over terrestrial and cable television networks. The ATSC published the Guide to the Use of the ATSC Digital Television Standard on Oct. 4, 1995, and this publication is also herein incorporated by reference.
In general, in accordance with the MPEG standards, the audio and video data comprising a multimedia data stream (or "bit stream") are encoded/compressed in an intelligent manner using a compression technique generally known as "motion coding". More particularly, rather than transmitting each video frame in its entirety, MPEG uses motion estimation for only those parts of sequential pictures that vary due to motion, where possible. In general, the picture elements or "pixels" of a picture are specified relative to those of a previously transmitted reference or "anchor" picture using differential or "residual" video, as well as so-called "motion vectors" that specify the location of a 16-by-16 array of pixels or "macroblock" within the current picture relative to its original location within the anchor picture. Three main types of video frames or pictures are specified by MPEG, namely, I-type, P-type, and B-type pictures.
An I-type picture is coded using only the information contained in that picture, and hence, is referred to as an "intra-coded" or simply, "intra" picture.
A P-type picture is coded/compressed using motion compensated prediction (or "motion estimation") based upon information from a past reference (or "anchor") picture (either I-type or P-type), and hence, is referred to as a "predictive" or "predicted" picture.
A B-type picture is coded/compressed using motion compensated prediction (or "motion estimation") based upon information from a past reference picture, a future reference picture (either I-type or P-type), or both, and hence, is referred to as a "bidirectional" picture. B-type pictures are usually inserted between I-type or P-type pictures, or combinations of the two.
The term "intra picture" is used herein to refer to I-type pictures, and the term "non-intra picture" is used herein to refer to both P-type and B-type pictures. It should be mentioned that although the frame rate of the video data represented by an MPEG bit stream is constant, the amount of data required to represent each frame can be different, e.g., one frame of video data (e.g., 1/30 of a second of playback time) can be represented by x bytes of encoded data, while another frame of video data can be represented by only a fraction (e.g., 5%) of x bytes of encoded data. Since the frame update rate is constant during playback, the data rate is variable.
In general, the encoding of an MPEG video data stream requires a number of steps. The first of these steps consists of partitioning each picture into macroblocks. Next, in theory, each macroblock of each "non-intra" picture in the MPEG video data stream is compared with all possible 16-by-16 pixel arrays located within specified vertical and horizontal search ranges of the current macroblock's corresponding location in the anchor picture(s). This theoretical "full search algorithm" (i.e., searching through every possible block in the search region for the best match) always produces the best match, but is seldom used in real-world applications because of the tremendous number of calculations that would be required, e.g., for a block size of N×N and a search region of (N+2w) by (N+2w), the distortion function MAE has to be calculated (2w+1)² times for each block. Rather, it is used only as a reference or benchmark to enable comparison of different, more practical motion estimation algorithms that can be executed far faster and with far fewer computations. These more practical motion estimation algorithms are generally referred to as "fast search algorithms".
The aforementioned search or "motion estimation" procedure, for a given prediction mode, results in a motion vector that corresponds to the position of the closest-matching macroblock (according to a specified matching criterion) in the anchor picture within the specified search range. Once the prediction mode and motion vector(s) have been determined, the pixel values of the closest-matching macroblock are subtracted from the corresponding pixels of the current macroblock, and the resulting 16-by-16 array of differential pixels is then transformed into 8-by-8 "blocks," on each of which is performed a discrete cosine transform (DCT), the resulting coefficients of which are each quantized and Huffman-encoded (as are the prediction type, motion vectors, and other information pertaining to the macroblock) to generate the MPEG bit stream. If no adequate macroblock match is detected in the anchor picture, or if the current picture is an intra, or "I-" picture, the above procedures are performed on the actual pixels of the current macroblock (i.e., no difference is taken with respect to pixels in any other picture), and the macroblock is designated an "intra" macroblock.
For all MPEG-2 prediction modes, the fundamental technique of motion estimation consists of comparing the current macroblock with a given 16-by-16 pixel array in the anchor picture, estimating the quality of the match according to the specified metric, and repeating this procedure for every such 16-by-16 pixel array located within the search range. The hardware or software apparatus that performs this search is usually termed the "search engine," and there exist a number of well-known criteria for determining the quality of the match. Among the best-known criteria are the Minimum Absolute Error (MAE), in which the metric consists of the sum of the absolute values of the differences of each of the 256 pixels in the macroblock with the corresponding pixel in the matching anchor picture macroblock; and the Minimum Square Error (MSE), in which the metric consists of the sum of the squares of the above pixel differences. In either case, the match having the smallest value of the corresponding sum is selected as the best match within the specified search range, and its horizontal and vertical positions relative to the current macroblock therefore constitute the motion vector. If the resulting minimum sum is nevertheless deemed too large, a suitable match does not exist for the current macroblock, and it is coded as an intra macroblock. For the purposes of the present invention, either of the above two criteria, or any other suitable criterion, may be used.
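By way of illustration only, the MAE and MSE matching criteria and the exhaustive full search over a ±w search range described above can be sketched as follows. This is a minimal software sketch for clarity, not the claimed apparatus; the block size `n` and search range `w` are parameters, and the function names are illustrative.

```python
def mae(current, candidate):
    """Minimum Absolute Error: sum of |difference| over all collocated pixels."""
    return sum(abs(c - r) for row_c, row_r in zip(current, candidate)
               for c, r in zip(row_c, row_r))

def mse(current, candidate):
    """Minimum Square Error: sum of squared pixel differences."""
    return sum((c - r) ** 2 for row_c, row_r in zip(current, candidate)
               for c, r in zip(row_c, row_r))

def full_search(current, anchor, top, left, w, n=16):
    """Exhaustive full search: evaluate the metric at all (2w+1)^2 offsets
    around (top, left) in the anchor picture and return the smallest sum
    together with the motion vector (dy, dx) of the best match."""
    best = None
    for dy in range(-w, w + 1):
        for dx in range(-w, w + 1):
            cand = [row[left + dx: left + dx + n]
                    for row in anchor[top + dy: top + dy + n]]
            score = mae(current, cand)
            if best is None or score < best[0]:
                best = (score, (dy, dx))
    return best
```

As the text notes, this brute-force search evaluates the distortion function (2w+1)² times per block, which is why it serves only as a benchmark for the fast search algorithms.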
The various fast search algorithms evaluate the distortion function (e.g., the MAE function) only at a predetermined subset of the candidate motion vector locations within the search region, thereby reducing the overall computational effort. These algorithms are based on the assumption that the distortion measure is monotonically decreasing in the direction of the best match prediction. Even though this assumption does not always hold, these algorithms can still find a good suboptimal motion vector with far less computation.
The most commonly used approach to motion estimation is a hybrid approach generally divided into several processing steps. First, the image can be decimated by pixel averaging. Next, the fast search algorithm operating on a smaller number of pixels is performed, producing a result in the vicinity of the best match. Then, a full search algorithm in a smaller search region around the obtained motion vector is performed. If half-pel vectors are required (as with MPEG-2), a half-pel search is performed as a separate step or is combined with the limited full search.
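The first stage of the hybrid approach, decimation by pixel averaging, can be sketched as follows. This is an illustrative sketch, not part of the claimed invention; it assumes 2-by-2 averaging, after which the coarse search runs on one quarter of the pixels, the resulting vector is doubled back to full resolution, and the limited full search (and any half-pel search) refines it.

```python
def decimate_2x2(img):
    """Decimate an image by 2x2 pixel averaging, quartering the number of
    pixels the coarse (rough) search must examine. Assumes even dimensions."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) // 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]
```

A coarse vector (dy, dx) found on the decimated images corresponds to (2*dy, 2*dx) at full resolution, which then seeds the smaller full search region described above.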
Even with the great savings that can be achieved in the hybrid approach to motion estimation, an enormous number of computations must still be performed for each evaluation of the MAE. Assuming that the distortion function has to be computed every clock cycle for every block offset, which is desirable in demanding applications such as MPEG-2 HDTV, where the motion block size is 16-by-16, a distortion function computational unit (DFCU) will consist of a number of simpler circuits of increasing bit width, starting from 8 bits (8-bit luminance data is used for motion estimation), to produce the MAE. This number will be equal to the sum of the following: 256 subtraction circuits, 256 absolute value compute circuits, and 255 summation circuits of increasing bit width, for a total of 767 circuits of increasing bit width starting at 8 bits, per DFCU.
Depending on picture resolution, a number of these extremely complex units will be required for a practical system. Using a smaller number of circuits within a DFCU in order to reuse its hardware is possible, but will substantially increase processing time and may not be acceptable in demanding applications such as HDTV. In this case, the number of DFCUs will simply have to be increased to compensate through enhanced parallel processing.
The first step in the hybrid approach to motion estimation (rough search) is usually the most demanding step in terms of hardware utilization because it has to cover the largest search region in order to produce a reasonably accurate match.
Based on the above and foregoing, there presently exists a need in the art for a method for motion estimation that enhances the speed at which motion estimation can be performed, that greatly reduces the amount and complexity of the motion estimation or DFCU hardware required to perform motion estimation, and that provides for significant picture quality improvement at a reasonable cost.
The motion estimation method disclosed by the present inventor in co-pending application Ser. No. 09/287,161, filed concurrently herewith, and entitled "Motion Estimation Method Using Orthogonal-Sum Block Matching", produces a much smaller amount of data that has to be compared in order to identify a best match, and leads to a substantial reduction in the motion estimation search engine hardware requirements, by searching for best matches by comparing unique macroblock signatures rather than by comparing the individual luminance values of the collocated pixels in the current macroblock and the search region. However, this inventive method does not directly address the problem of accelerating the motion estimation search procedure. For example, this motion estimation method using orthogonal-sum block matching involves a separate computation of the orthogonal sums for each macroblock position within the anchor (reference) picture.
The method and device of the present invention greatly reduce the computational requirements and significantly accelerate the motion estimation search by storing in a local memory and extensively reusing previously computed (available) sums to produce the orthogonal sums, thereby also significantly reducing the motion estimation search engine hardware requirements. Further, the local memory can advantageously be a RAM, e.g., a DRAM or SRAM, as opposed to being implemented as a matrix of shift registers, as is necessary with the presently available technology. However, although this constitutes a novel and presently preferred feature of the present invention, in one of its aspects, this is not in and of itself an essential feature of the present invention, in its broadest sense, as will become fully apparent hereinafter.
The present invention encompasses, in one of its aspects, a method for updating a horizontal sum representing the sum of the values of N pixels contained in a horizontal row of a reference pixel array during a motion estimation search, the method including the steps of computing the horizontal sum; displacing the reference pixel array by one pixel in a horizontal direction; and, updating the horizontal sum to produce a new horizontal sum by adding a new pixel value to the previously-computed horizontal sum, and subtracting an old pixel value no longer contained in the horizontal row of the reference pixel array after the displacing step, from the previously-computed horizontal sum. The displacing and updating steps are preferably repeated until a limit of a horizontal search range is reached. In an exemplary embodiment, the step of computing is performed by using a horizontal sum modifier circuit that accumulates the values of the N pixels contained in the horizontal row of the reference pixel array prior to performing the step of displacing, and the step of updating the horizontal sum is performed by using the horizontal sum modifier circuit to compute the new horizontal sum using the following equation:
OSNEW=OSOLD−a00+an0,
where OSNEW is the new horizontal sum, OSOLD is the horizontal sum prior to the last iteration of the displacing step, a00 is the pixel value of the pixel that was the horizontal origin of the reference pixel array prior to the last iteration of the displacing step, and an0 is the pixel value of the pixel that is the horizontal origin of the reference pixel array after the reference pixel array has been displaced by one pixel to the right with respect to the previous position of the reference pixel array as a result of the last iteration of the displacing step.
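The sliding update OSNEW=OSOLD−a00+an0 can be illustrated in software as follows. This is a minimal sketch of the arithmetic performed by the horizontal sum modifier circuit, not the circuit itself; the generator form and function name are illustrative.

```python
def horizontal_sums_sliding(row, n):
    """Yield the sum of each n-pixel window of `row`. The first window is
    accumulated directly; every subsequent window is produced with only one
    addition and one subtraction: OSNEW = OSOLD - a00 + an0, where a00 is
    the pixel leaving the window and an0 is the pixel entering it."""
    os_sum = sum(row[:n])  # initial accumulation over the N pixels
    yield os_sum
    for k in range(1, len(row) - n + 1):
        os_sum = os_sum - row[k - 1] + row[k + n - 1]  # drop old origin, add new end pixel
        yield os_sum
```

Each one-pixel displacement thus costs two operations instead of the N−1 additions a fresh summation would require.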
The present invention, in another of its aspects, encompasses a method for generating a horizontal sum for each of N rows of a reference pixel array and for simultaneously generating a vertical sum for each of M columns of the reference pixel array for each iteration of a horizontal motion estimation search of a prescribed search region of a reference picture, the method including the steps of:
(a) storing initial pixel values corresponding to an initial position of the reference pixel array by storing M individual pixel values in each of N rows of a memory and storing N individual pixel values in each of M columns of the memory;
(b) computing the horizontal sum for each of the N rows of the initial position of the reference pixel array and storing each of the computed horizontal sums;
(c) computing the vertical sum for each of the M columns of the initial position of the reference pixel array and storing the computed vertical sums in a shift register;
(d) displacing the reference pixel array by one pixel in a horizontal direction;
(e) in response to the displacing step:
i) providing N new pixel values, one for each of the N rows of the reference pixel array corresponding to a last column of the reference pixel array after being displaced by one pixel in the horizontal direction;
ii) summing the N new pixel values to produce a new vertical sum, and applying the new vertical sum to the shift register, and shifting the previously-stored vertical sums by one word in the horizontal direction of the motion estimation search, whereby a first-stored vertical sum is discarded and the new vertical sum is stored in the former storage location of a last-stored vertical sum;
iii) outputting a set of M new vertical sums from the shift register;
iv) updating each of the horizontal sums to produce a set of N new horizontal sums by adding the respective one of the N new pixel values to the previously-computed horizontal sum for each of the N rows, and by subtracting respective old pixel values no longer contained in the M columns of the reference pixel array after being displaced by one pixel in the horizontal direction from the previously-computed horizontal sum for each of the N rows; and,
v) outputting the set of N new horizontal sums.
Steps (d) and (e) are preferably repeated until a limit of a horizontal search range is reached. In an exemplary embodiment, step (b) is performed by using N horizontal sum modifier circuits corresponding to respective ones of the N rows of the memory, whereby each of the horizontal sum modifier circuits accumulates the M individual pixel values stored in the respective row of the memory, and step (e) iv) is performed by using the horizontal sum modifier circuits to compute the new horizontal sums for the respective rows of the reference pixel array using the following equation:
OSNEWi=OSOLDi−a00i+an0i,
where OSNEWi is the new horizontal sum for the respective row of the reference pixel array after the last iteration of the displacing step, OSOLDi is the horizontal sum for the respective row of the reference pixel array prior to the last iteration of the displacing step, a00i is the pixel value of the first pixel of the respective row of the reference pixel array prior to the last iteration of the displacing step, and an0i is the pixel value of the last pixel of the respective row of the reference pixel array after the reference pixel array has been displaced by one pixel to the right with respect to the previous position of the reference pixel array as a result of the last iteration of the displacing step.
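Steps (a) through (e) above can be sketched in software as follows. This is an illustrative model, not the claimed RAM-and-shift-register apparatus: a `deque` of fixed length `m` stands in for the shift register of vertical sums, and a Python list stands in for the N stored horizontal sums; the function name and parameters are illustrative.

```python
from collections import deque

def orthogonal_sums(picture, top, left, n, m, steps):
    """Model of the N-row / M-column orthogonal-sum generator.

    At the initial position (steps (a)-(c)), compute the N horizontal (row)
    sums and load the M vertical (column) sums into a shift register. For
    each one-pixel rightward displacement (steps (d)-(e)), fetch the N new
    pixel values of the new last column, push their sum into the shift
    register (discarding the first-stored vertical sum), and update every
    horizontal sum with one add and one subtract:
    OSNEWi = OSOLDi - a00i + an0i.
    Yields (horizontal_sums, vertical_sums) at each position."""
    h = [sum(picture[top + i][left: left + m]) for i in range(n)]
    v = deque((sum(picture[top + i][left + j] for i in range(n))
               for j in range(m)), maxlen=m)
    yield list(h), list(v)
    for s in range(1, steps + 1):
        new_col = [picture[top + i][left + s + m - 1] for i in range(n)]  # step (e)(i)
        v.append(sum(new_col))                    # steps (e)(ii)-(iii): shift register
        for i in range(n):                        # step (e)(iv): sliding row-sum update
            h[i] = h[i] - picture[top + i][left + s - 1] + new_col[i]
        yield list(h), list(v)                    # step (e)(v)
```

At each displacement, only N+1 new sums are touched (N row updates and one new column sum), rather than recomputing all N+M orthogonal sums from scratch.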
The present invention, in another of its aspects, encompasses a device for updating a horizontal sum representing the sum of the values of N pixels contained in a horizontal row of a reference pixel array during a motion estimation search during which the reference pixel array is displaced by one pixel in a horizontal search direction during each of a plurality of iterations of the motion estimation search, the device including a horizontal sum modifier circuit that accumulates the values of the N pixels contained in the horizontal row of the reference pixel array prior to any displacement of the reference pixel array to produce the horizontal sum, and that updates the horizontal sum by computing the new horizontal sum using the following equation:
OSNEW=OSOLD−a00+an0,
where OSNEW is the new horizontal sum after the last displacement of the reference pixel array by one pixel in the horizontal direction, OSOLD is the horizontal sum prior to the last displacement of the reference pixel array by one pixel in the horizontal direction, a00 is the pixel value of the pixel that was the horizontal origin of the reference pixel array prior to the last displacement of the reference pixel array by one pixel in the horizontal direction, and an0 is the pixel value of the pixel that is the horizontal origin of the reference pixel array after the reference pixel array has been displaced by one pixel to the right with respect to the previous position of the reference pixel array as a result of the last displacement of the reference pixel array by one pixel in the horizontal direction.
The present invention, in yet another of its aspects, encompasses a RAM-based orthogonal-sum generator and a motion estimation search engine that implement the above-described methods of the present invention.