As multimedia applications become more and more popular, video compression techniques are becoming increasingly important. The main principle of these compression techniques is to eliminate redundancy among successive frames, thereby reducing both the storage requirement and the amount of data to be transmitted. Intra prediction and inter prediction are two coding techniques defined by the H.264/AVC compression standard. The intra prediction technique exploits the spatial correlation among neighboring blocks within one frame, while the inter prediction technique exploits the temporal correlation among consecutive frames.
For inter prediction, the H.264/AVC standard defines seven inter modes of different block sizes for a 16×16 macroblock: 16×16 (T1 mode), 16×8 (T2 mode), 8×16 (T3 mode), 8×8 (T4 mode), 8×4 (T5 mode), 4×8 (T6 mode), and 4×4 (T7 mode), as shown in FIG. 1. After being coded, each block can be represented by its residual values and motion vector. Display quality can be improved by selecting blocks of small size, but at the cost of increased complexity and coding time. Therefore, to balance display quality against coding efficiency, different block sizes are adopted according to the complexity of the displayed image, so that highly efficient video compression can be achieved.
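The trade-off described above can be illustrated with a minimal sketch (not part of the standard text; the mode names follow the T1–T7 labels used here): smaller partitions allow finer motion description but require more motion vectors per macroblock, and thus more bits and more search effort.

```python
# The seven H.264/AVC inter partition modes for a 16x16 macroblock,
# keyed by the T1-T7 labels used in this description.
INTER_MODES = {
    "T1": (16, 16),
    "T2": (16, 8),
    "T3": (8, 16),
    "T4": (8, 8),
    "T5": (8, 4),
    "T6": (4, 8),
    "T7": (4, 4),
}

def motion_vectors_per_macroblock(mode):
    """Number of partitions (hence motion vectors) a 16x16 macroblock
    is split into under the given mode: finer modes cost more vectors."""
    w, h = INTER_MODES[mode]
    return (16 // w) * (16 // h)
```

For example, T1 needs a single motion vector per macroblock, while T7 needs sixteen, which is why the finest partitioning is reserved for regions of complex motion.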
After being compressed, the video data are transformed into video bitstreams suitable for transmission and storage. However, the transmission of highly compressed video bitstreams can suffer from packet erasures (especially in wireless video transmission). To avoid the degradation in quality of the received video frames caused by video packet erasures, three mechanisms are commonly used to guard against possible packet losses: automatic retransmission request (ARQ), forward error correction (FEC), and error concealment. Compared with the ARQ and FEC techniques, error concealment techniques require no additional bandwidth and are especially useful in multicast and broadcast situations. The error concealment techniques executed at the video decoder can be classified into two types: spatial error concealment and temporal error concealment. Spatial error concealment utilizes spatially redundant information within a frame to recover the damaged video sequences, while temporal error concealment exploits the high correlation among consecutive frames of the coded sequence to reconstruct the damaged video sequences. Since high temporal correlation usually exists among consecutive frames, except in some specific situations (such as scene changes, or the appearance and disappearance of objects), temporal error concealment generally offers better video quality than spatial error concealment.
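The two concealment families can be sketched as follows. This is an illustrative simplification, not an implementation from any cited work: frames are assumed to be 2-D arrays of pixel values, and the block coordinates and sizes are hypothetical. The temporal variant copies the co-located block from a reference frame (the zero-motion-vector case), while the spatial variant interpolates the damaged block from intact pixels above and below it.

```python
def temporal_conceal(ref_frame, x, y, bs=16):
    """Temporal concealment (zero-MV case): copy the co-located
    bs x bs block from the reference frame."""
    return [row[x:x + bs] for row in ref_frame[y:y + bs]]

def spatial_conceal(frame, x, y, bs=16):
    """Spatial concealment (crude sketch): linearly interpolate each
    column between the intact pixel row just above the damaged block
    and the intact pixel row just below it."""
    top = frame[y - 1][x:x + bs]
    bottom = frame[y + bs][x:x + bs]
    block = []
    for i in range(bs):
        w = (i + 1) / (bs + 1)  # interpolation weight toward the bottom row
        block.append([round((1 - w) * t + w * b)
                      for t, b in zip(top, bottom)])
    return block
```

Real decoders use considerably more elaborate variants of both (e.g., boundary matching for temporal concealment, directional interpolation for spatial concealment), but the sketch captures why temporal concealment wins whenever consecutive frames are highly correlated.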
Since the temporal error concealment approach conceals a damaged block by utilizing the reference frames to exploit temporal redundancy in a video sequence, the motion vector of the damaged block must first be estimated. Various simple methods for predicting the motion vector of the damaged block have been proposed, such as using a zero motion vector, using the average of the motion vectors of spatially neighboring blocks, or using the motion vectors of the temporally corresponding blocks in the reference frames. Additional information in this regard can be found in "Error control and concealment for video communication: a review" by Y. Wang et al., Proc. IEEE, vol. 86, no. 5, pp. 974-997, May 1998; "Enhanced error concealment with mode selection" by D. Agrafiotis et al., IEEE Trans. Circuits Syst. Video Technol., vol. 16, no. 8, pp. 960-973, August 2006; "An efficient error concealment implementation for MPEG-4 video streams" by S. Valente et al., IEEE Trans. Consumer Electronics, vol. 47, no. 3, pp. 568-578, August 2001; "A novel selective motion vector matching algorithm for error concealment in MPEG-4 video transmission over error-prone channels" by B. Yan et al., IEEE Trans. Consumer Electronics, vol. 49, no. 4, pp. 1416-1423, November 2003; "A cell-loss concealment technique for MPEG-2 coded video" by J. Zhang et al., IEEE Trans. Circuits Syst. Video Technol., vol. 10, no. 4, pp. 659-665, June 2000; "Robust error concealment for visual communications in burst-packet-loss networks" by J. Y. Pyun et al., IEEE Trans. Consumer Electronics, vol. 49, no. 4, pp. 1013-1019, November 2003; and "Temporal Error Concealment for H.264 Using Optimum Regression Plane" by S. C. Huang et al., in Proc. Int. Conf. MultiMedia Modeling (MMM), January 2008, LNCS 4903, pp. 402-412, the entire contents of which are incorporated herein by reference.
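The three simple motion-vector prediction strategies listed above can be sketched as follows. This is an illustrative outline only; motion vectors are assumed to be (dx, dy) tuples, and the input names (neighbor list, reference-frame motion-vector field) are hypothetical.

```python
def zero_mv():
    """Strategy 1: simply assume the damaged block did not move."""
    return (0, 0)

def average_mv(neighbor_mvs):
    """Strategy 2: average the motion vectors of the spatially
    neighboring blocks (e.g., blocks above, below, left, and right)."""
    n = len(neighbor_mvs)
    return (round(sum(mv[0] for mv in neighbor_mvs) / n),
            round(sum(mv[1] for mv in neighbor_mvs) / n))

def colocated_mv(ref_mv_field, block_row, block_col):
    """Strategy 3: reuse the motion vector of the temporally
    corresponding (co-located) block in the reference frame."""
    return ref_mv_field[block_row][block_col]
```

Each strategy trades accuracy for simplicity: the zero vector is cheapest, neighbor averaging assumes smooth spatial motion, and the co-located vector assumes the motion field is stable over time.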
Furthermore, other improvements for the temporal error concealment approach include "Temporal error concealment using motion field interpolation" by M. E. Al-Mualla et al., Electron. Lett., vol. 35, pp. 215-217, 1999; "Vector rational interpolation schemes for erroneous motion field estimation applied to MPEG-2 error concealment" by S. Tsekeridou et al., IEEE Trans. Multimedia, vol. 6, no. 6, pp. 876-885, December 2004; "A motion vector recovery algorithm for digital video using Lagrange interpolation" by J. Zheng et al., IEEE Trans. Broadcast., vol. 49, no. 4, pp. 383-389, December 2003; "Error-concealment algorithm for H.26L using first-order plane estimation" by J. Zheng et al., IEEE Trans. Multimedia, vol. 6, no. 6, pp. 801-805, December 2004; "Efficient motion vector recovery algorithm for H.264 based on a polynomial model" by J. Zheng et al., IEEE Trans. Multimedia, vol. 7, no. 3, pp. 507-513, June 2005; "A concealment method for video communications in an error-prone environment" by S. Shirani et al., IEEE Journal on Selected Areas in Communication, vol. 18, pp. 1122-1128, June 2000; "Multiframe error concealment for MPEG-coded video delivery over error-prone networks" by Y. C. Lee et al., IEEE Trans. Image Process., vol. 11, no. 11, pp. 1314-1331, November 2002; "POCS-based error concealment for packet video using multiframe overlap information" by G. S. Yu et al., IEEE Trans. Circuits Syst. Video Technol., vol. 8, pp. 422-434, August 1998; "Concealment of whole-frame losses for wireless low bit-rate video based on multiframe optical flow estimation" by S. Belfiore et al., IEEE Trans. Multimedia, vol. 7, no. 2, pp. 316-329, April 2005; and "Frame concealment for H.264/AVC decoders" by P. Baccichet et al., IEEE Trans. Consumer Electronics, vol. 51, no. 1, pp. 227-233, February 2005, the entire contents of which are incorporated herein by reference.
Although many techniques for improving the temporal error concealment approach have been proposed, both the prediction accuracy of the motion vector and the quality of the concealment still need to be further optimized.