1. Field of the Invention
This disclosure relates to digital video processing and, more particularly, block-based coding of video data.
2. Description of Related Art
Video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless communication devices, personal digital assistants (PDAs), laptop computers, desktop computers, digital cameras, digital recording devices, cellular or satellite radio telephones, video game consoles, handheld gaming devices, and the like. Digital video coding can provide significant improvements over conventional analog systems in creating, modifying, transmitting, storing, recording and playing full motion multimedia sequences. Broadcast networks may use video coding to facilitate the broadcast of one or more channels of multimedia (audio-video) sequences to wireless subscriber devices. Video coding is also used to support video telephony (VT) applications, such as video conferencing by cellular radio telephones.
A number of different coding standards have been established for coding digital video sequences. The Moving Picture Experts Group (MPEG), for example, has developed a number of standards including MPEG-1, MPEG-2 and MPEG-4. Other standards include the International Telecommunication Union (ITU) H.263 and H.264 standards, QuickTime™ technology developed by Apple Computer of Cupertino, Calif., Video for Windows™ developed by Microsoft Corporation of Redmond, Wash., Indeo™ developed by Intel Corporation, RealVideo™ from RealNetworks, Inc. of Seattle, Wash., and Cinepak™ developed by SuperMac, Inc. Furthermore, new standards continue to emerge and evolve. The ITU H.264 standard is also set forth in MPEG-4, Part 10, Advanced Video Coding (AVC).
Most video coding techniques utilize block-based coding, which divides video frames into blocks of pixels and correlates the blocks with those of other frames in the video sequence. By encoding differences between a current block and a predictive block of another frame, data compression can be achieved. The term “macroblock” is often used to define discrete blocks of a video frame that are compared to a search space (which is typically a subset of a previous or subsequent frame of the video sequence). Macroblocks may also be further sub-divided into partitions or sub-partitions. The ITU H.264 standard supports 16 by 16 macroblocks, 16 by 8 partitions, 8 by 16 partitions, 8 by 8 partitions, 8 by 4 sub-partitions, 4 by 8 sub-partitions and 4 by 4 sub-partitions. Other standards may support differently sized blocks, macroblocks, partitions and/or sub-partitions.
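The partition hierarchy described above can be sketched as follows. This is an illustrative example only, not part of the disclosure: the constant and function names are hypothetical, and the helper simply enumerates where each equally sized partition would sit within a region.

```python
# Illustrative sketch (hypothetical helper names): tiling a 16x16
# macroblock into the partition shapes the ITU H.264 standard supports.
MACROBLOCK = 16
H264_PARTITIONS = [(16, 16), (16, 8), (8, 16), (8, 8)]
# An 8x8 partition may be further sub-divided into these sub-partitions:
H264_SUBPARTITIONS = [(8, 4), (4, 8), (4, 4)]

def tile(region_h, region_w, part_h, part_w):
    """List the (top, left) origin of each partition within a region."""
    return [(top, left)
            for top in range(0, region_h, part_h)
            for left in range(0, region_w, part_w)]

# e.g. a 16x16 macroblock split into 8x16 partitions yields two origins:
# tile(16, 16, 8, 16) -> [(0, 0), (8, 0)]
```

For instance, `tile(8, 8, 4, 4)` lists the four 4 by 4 sub-partitions of a single 8 by 8 partition.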
For each block (macroblock, partition or sub-partition) in a video frame, an encoder compares similarly sized blocks of one or more immediately preceding video frames (and/or subsequent frames) to identify a similar block, referred to as the “prediction block” or “best match.” The process of comparing a current video block to video blocks of other frames is generally referred to as motion estimation. Once a best match is identified for a given block to be coded, the encoder can encode the differences between the current block and the best match, a process referred to as motion compensation. Motion compensation comprises creating a difference block (referred to as the residual), which includes information indicative of the differences between the current block to be encoded and the best match. In particular, motion compensation usually refers to the act of fetching the best match using a motion vector, and then subtracting the best match from the input block to generate the residual. Additional coding steps, such as entropy coding, may be performed on the residual to further compress the bitstream.
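The motion estimation and motion compensation steps above can be illustrated with a minimal sketch. This example is not from the disclosure: it assumes an exhaustive full search over the reference frame with a sum-of-absolute-differences (SAD) cost, one common matching criterion among many, and all function names are hypothetical.

```python
# Illustrative sketch (hypothetical names): full-search motion estimation
# with a SAD cost, then motion compensation that subtracts the best match
# from the current block to form the residual.

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def extract_block(frame, top, left, size):
    """Fetch a size-by-size block whose origin is (top, left)."""
    return [row[left:left + size] for row in frame[top:top + size]]

def motion_estimate(current_block, ref_frame, size):
    """Return (motion_vector, best_match) minimizing SAD over the frame."""
    best_cost, best_mv, best_match = None, None, None
    for top in range(len(ref_frame) - size + 1):
        for left in range(len(ref_frame[0]) - size + 1):
            candidate = extract_block(ref_frame, top, left, size)
            cost = sad(current_block, candidate)
            if best_cost is None or cost < best_cost:
                best_cost, best_mv, best_match = cost, (top, left), candidate
    return best_mv, best_match

def motion_compensate(current_block, best_match):
    """Residual = current block minus best match, element-wise."""
    return [[c - p for c, p in zip(row_c, row_p)]
            for row_c, row_p in zip(current_block, best_match)]
```

For a current block that exactly matches a region of the reference frame, the returned residual is all zeros, which is why a good best match compresses well after subsequent coding steps.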