Digital video transmission systems have come into increasingly widespread use with the development of MPEG-2 and other video compression techniques. Motion video signals typically contain a significant amount of intra-frame or "spatial" redundancy as well as inter-frame or "temporal" redundancy. Video compression techniques take advantage of this spatial and temporal redundancy to significantly reduce the amount of information bandwidth required to transmit, store and process video signals. The MPEG-2 standard was developed by the International Organization for Standardization (ISO) Moving Picture Experts Group (MPEG) and is described in "Information Technology - Generic Coding of Moving Pictures and Associated Audio Information: Video," ISO/IEC DIS 13818-2, which is incorporated herein by reference.
MPEG-2 video compression removes spatial redundancy through a process involving discrete cosine transformation, quantization, zig-zag scanning, run-amplitude coding and variable-length coding. Temporal redundancy is removed through a process of inter-frame motion estimation and predictive coding. MPEG-2 frames may be either intra-coded (I) frames, forward-only predictive (P) frames or bidirectionally-predictive (B) frames. An I frame is encoded using only the spatial compression techniques noted above, while a P frame is encoded using "predictive" macroblocks selected from a single reference frame. A given B frame is encoded using "bidirectionally-predictive" macroblocks generated by interpolating between a pair of predictive macroblocks selected from two reference frames, one preceding and the other following the B frame.
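Two of the spatial-compression steps noted above, zig-zag scanning and run-amplitude coding, can be illustrated with a minimal sketch. The sketch below assumes an already-quantized 8x8 coefficient block and emits (zero-run, amplitude) pairs terminated by an end-of-block marker; it omits the subsequent variable-length coding tables that an actual MPEG-2 encoder would apply.

```python
def zigzag_order(n=8):
    """Return the (row, col) visit order for a standard zig-zag scan.

    Coefficients are visited along anti-diagonals of increasing sum;
    even-sum diagonals are walked bottom-left to top-right, odd-sum
    diagonals top-right to bottom-left.
    """
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  -rc[0] if (rc[0] + rc[1]) % 2 == 0 else rc[0]))

def run_amplitude_encode(block):
    """Encode a quantized block as (zero-run, amplitude) pairs plus 'EOB'."""
    pairs, run = [], 0
    for r, c in zigzag_order(len(block)):
        v = block[r][c]
        if v == 0:
            run += 1          # extend the current run of zero coefficients
        else:
            pairs.append((run, v))
            run = 0
    pairs.append("EOB")       # trailing zeros collapse into the end-of-block marker
    return pairs
```

Because quantization drives most high-frequency coefficients to zero, the zig-zag scan groups those zeros into long runs near the end of the scan, which the run-amplitude pairs then represent compactly.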
Advances in video compression technology have led to the implementation of low bit rate video transmission systems. For example, wireless communication channels such as those found in cellular and personal communication services (PCS) systems are being utilized to deliver digital video signals at low bit rates to subscribers. A significant problem in these and other video transmission systems relates to the effect of channel errors on the quality of the subsequently decoded video. As noted above, MPEG-2 and other video compression techniques use inter-frame predictive coding to reduce temporal redundancy in a sequence of video frames. A channel error which affects a given macroblock in a particular frame can therefore also affect other frames which make use of the given macroblock in predictive coding.
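The propagation effect described above can be sketched with a toy model: each frame predicts a given macroblock position from the previous frame, so a channel error at that position in one frame remains visible in every subsequent frame until an intra-coded macroblock arrives. The frame indices and the single-position model here are illustrative assumptions, not part of any standard.

```python
def corrupted_history(num_frames, error_frame, refresh_frame=None):
    """Track whether one macroblock position is corrupted in each frame.

    error_frame:   frame index at which a channel error hits the position.
    refresh_frame: frame index at which an intra-coded macroblock refreshes
                   it (None means no refresh is ever sent).
    Returns a list of booleans, one per frame.
    """
    corrupted = False
    history = []
    for f in range(num_frames):
        if refresh_frame is not None and f == refresh_frame:
            corrupted = False   # intra-coded macroblock needs no prediction
        if f == error_frame:
            corrupted = True    # channel error damages this frame's data
        # predictive coding carries the current state into the next frame
        history.append(corrupted)
    return history
```

Without a refresh, the error persists indefinitely; with one, the damage is bounded by the interval between the error and the next intra-coded macroblock at that position.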
To combat these effects of channel errors, conventional approaches to providing channel error recovery in low bit rate systems involve periodically transmitting intra-coded macroblocks. The intra-coded macroblocks do not depend on any other macroblock from a previous or subsequent frame, and therefore may be used to "refresh" one or more areas of a frame. The periodic transmission of intra-coded macroblocks effectively removes accumulated error effects which may have been associated with previously-transmitted macroblocks. An important limitation of this technique is that the number of bits required to transmit intra-coded macroblocks usually restricts the number of such macroblocks which can be sent for any particular frame. Typical conventional approaches either randomly or sequentially select which macroblocks to send in intra-coded form, such that different macroblock positions may be refreshed for each frame. However, these and other conventional approaches generally fail to identify adequately which macroblocks should be refreshed in order to provide optimal error recovery capability. A need therefore exists for an improved error recovery technique suitable for use with low bit rate video signals transmitted over wireless communication channels.
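The sequential variant of the conventional refresh selection described above can be sketched as a cyclic scheduler: a fixed per-frame budget of intra-coded macroblocks walks through all macroblock positions in order, wrapping around so every position is eventually refreshed. The macroblock counts and budget below are illustrative assumptions.

```python
def sequential_refresh_schedule(num_mbs, mbs_per_frame, num_frames):
    """Pick which macroblock positions to intra-code in each frame.

    num_mbs:       total macroblock positions in a frame.
    mbs_per_frame: bit-budget-limited number of intra macroblocks per frame.
    Returns one list of positions per frame, cycling through all positions.
    """
    schedule, pos = [], 0
    for _ in range(num_frames):
        # take the next contiguous group of positions, wrapping at num_mbs
        selected = [(pos + i) % num_mbs for i in range(mbs_per_frame)]
        pos = (pos + mbs_per_frame) % num_mbs
        schedule.append(selected)
    return schedule
```

Note that this scheduler, like the random variant, is blind to content: it refreshes positions on a fixed rotation regardless of which macroblocks actually carry accumulated error, which is precisely the limitation the passage above identifies.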