A number of video encoding standards have been established for encoding digital video sequences. The Moving Picture Experts Group (MPEG), for example, has developed several standards, including MPEG-1, MPEG-2 and MPEG-4. Other examples include the International Telecommunication Union (ITU)-T H.263 standard, and the emerging ITU-T H.264 standard together with its counterpart, ISO/IEC MPEG-4, Part 10, i.e., Advanced Video Coding (AVC). These video encoding standards support improved transmission efficiency of video sequences by encoding the data in a compressed manner.
Compression reduces the overall amount of data that needs to be transmitted. Typical digital video encoding techniques utilize similarities between successive video frames, referred to as temporal or Inter-frame correlation, to provide Inter-frame compression. Inter-frame compression techniques exploit data redundancy across frames by converting pixel-based representations of video frames into motion representations. Frames encoded using Inter-frame techniques are referred to as P (“predictive”) frames or B (“bi-directional”) frames. Other frames, referred to as I (“intra”) frames, are Intra-coded using spatial compression.
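The conversion from pixel data to a motion representation can be illustrated with a minimal sketch of block-based motion estimation, the core of Inter-frame (P-frame) coding. The sketch below uses an exhaustive full search over a small window and plain Python lists as grayscale frames; all function names, the block size, and the search range are illustrative assumptions, not taken from any particular standard.

```python
# Sketch of Inter-frame compression: each block of the current frame is
# represented by a motion vector into the reference frame plus a residual,
# rather than by its raw pixels. Names and parameters are illustrative.

def sad(ref, cur, rx, ry, cx, cy, bs):
    """Sum of absolute differences between a bs x bs block of the
    reference frame at (rx, ry) and of the current frame at (cx, cy)."""
    total = 0
    for dy in range(bs):
        for dx in range(bs):
            total += abs(ref[ry + dy][rx + dx] - cur[cy + dy][cx + dx])
    return total

def motion_estimate(ref, cur, bs=2, search=1):
    """Full-search motion estimation over a +/-search pixel window.
    Returns one (vx, vy) motion vector and one residual block per block."""
    h, w = len(cur), len(cur[0])
    vectors, residuals = [], []
    for by in range(0, h, bs):
        for bx in range(0, w, bs):
            best = (0, 0)
            best_cost = sad(ref, cur, bx, by, bx, by, bs)
            for vy in range(-search, search + 1):
                for vx in range(-search, search + 1):
                    rx, ry = bx + vx, by + vy
                    if 0 <= rx <= w - bs and 0 <= ry <= h - bs:
                        cost = sad(ref, cur, rx, ry, bx, by, bs)
                        if cost < best_cost:
                            best_cost, best = cost, (vx, vy)
            vx, vy = best
            # Residual: what the motion-compensated prediction misses.
            residual = [[cur[by + dy][bx + dx] - ref[by + vy + dy][bx + vx + dx]
                         for dx in range(bs)] for dy in range(bs)]
            vectors.append(best)
            residuals.append(residual)
    return vectors, residuals

# Current frame content is the reference shifted one pixel to the left,
# so most blocks are fully predicted by a (1, 0) motion vector.
ref = [[1, 2, 3, 4, 5, 6],
       [7, 8, 9, 10, 11, 12],
       [1, 2, 3, 4, 5, 6],
       [7, 8, 9, 10, 11, 12]]
cur = [[2, 3, 4, 5, 6, 6],
       [8, 9, 10, 11, 12, 12],
       [2, 3, 4, 5, 6, 6],
       [8, 9, 10, 11, 12, 12]]
vectors, residuals = motion_estimate(ref, cur)
```

Because most blocks are predicted exactly, their residuals are all zeros and only the small motion vectors need to be coded, which is the source of the Inter-frame compression gain.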
To meet low bandwidth requirements, some video applications encode video at a reduced frame rate using frame skipping. Unfortunately, low-frame-rate video can produce artifacts in the form of motion jerkiness. Frame interpolation or extrapolation may be employed at the decoder side to approximate the content of skipped frames and, in effect, up-convert the actual frame rate to provide a perception of smoother motion. This use of frame interpolation or extrapolation is referred to as frame rate up-conversion (FRUC). Although FRUC may enhance temporal quality by interpolating or extrapolating skipped frames, interpolation or extrapolation of some frames may also introduce undesirable spatial artifacts that undermine the visual quality of the video sequence.
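The simplest form of decoder-side FRUC can be sketched as pixel-wise averaging of the received frames on either side of a skipped frame. This is only an illustrative baseline (the function names below are assumptions); on moving content, plain averaging produces exactly the kind of spatial artifacts (blurring, ghosting) noted above, which is why practical decoders favor motion-compensated interpolation.

```python
# Sketch of frame rate up-conversion (FRUC) by frame averaging:
# each skipped frame is approximated as the pixel-wise average of its
# neighboring decoded frames, doubling the effective frame rate.

def interpolate_skipped_frame(prev_frame, next_frame):
    """Pixel-wise average of two grayscale frames (lists of rows)."""
    return [[(p + n) // 2 for p, n in zip(prow, nrow)]
            for prow, nrow in zip(prev_frame, next_frame)]

def upconvert(frames):
    """Insert one interpolated frame between each pair of decoded frames."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append(interpolate_skipped_frame(a, b))
    out.append(frames[-1])
    return out

# Two 2x2 decoded frames; the up-converted sequence has three frames,
# with the middle one interpolated.
frames = [[[0, 0], [0, 0]],
          [[100, 100], [100, 100]]]
upconverted = upconvert(frames)
```

For a static scene such as this one the interpolated frame is a faithful mid-point; for moving objects, averaging superimposes the two positions of the object, illustrating why interpolation of some frames can undermine visual quality.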