Uncompressed digital video requires a large amount of storage space and data transfer bandwidth. Since large storage and bandwidth requirements translate into increased video transmission and distribution costs, compression techniques have been developed to minimize the size of the video while maximizing its quality. Numerous intra- and inter-frame compression algorithms have been developed that compress multiple frames using, for example, frequency-domain transformation of blocks within frames, motion vector prediction that reduces the temporal redundancy between frames, and entropy coding.
Interframe compression entails synthesizing subsequent images from a reference frame by the use of motion compensation. Motion compensation entails applying motion vector estimation algorithms, for example a block matching algorithm, to identify temporal redundancy and differences in successive frames of a digital video sequence, and storing the differences between successive frames along with an entire image of a reference frame, typically in a moderately compressed format. The differences are obtained by comparing each successive frame with the reference frame and are then stored. Periodically, such as when a new video sequence is displayed, a new reference frame is extracted from the sequence, and subsequent comparisons are performed against this new reference frame. The interframe compression ratio may be kept constant while the video quality is allowed to vary. Alternatively, interframe compression ratios may be content-dependent; for example, if the video clip being compressed includes many abrupt scene transitions from one image to another, the compression is less efficient. Examples of video compression formats that use an interframe compression technique include Moving Picture Experts Group (MPEG), Digital Video Interactive (DVI), and Indeo, among others.
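The block matching mentioned above can be sketched as an exhaustive full search minimizing a sum-of-absolute-differences (SAD) cost. This is a minimal illustration, not any particular codec's implementation; the function name, block size, and search range are assumptions made here for clarity:

```python
import numpy as np

def best_motion_vector(ref, cur, bx, by, block=8, search=4):
    """Exhaustive block matching: find the offset (dx, dy) into `ref`
    that best predicts the `block` x `block` region of `cur` anchored
    at (bx, by), using sum of absolute differences (SAD) as the cost."""
    target = cur[by:by + block, bx:bx + block].astype(np.int32)
    best_mv, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            # Skip candidates that fall outside the reference frame.
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = int(np.abs(target - cand).sum())
            if best_sad is None or sad < best_sad:
                best_mv, best_sad = (dx, dy), sad
    return best_mv, best_sad
```

In an encoder, only the motion vector and the (typically small) residual between the target block and its best-matching reference block would then be stored, rather than the block itself.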
Several of these interframe compression techniques, such as MPEG, use block-based video encoding that in turn utilizes Discrete Cosine Transform (DCT) based encoding. The generated DCT coefficients are scanned in zig-zag order and are entropy encoded using various schemes. In addition to the spatial information of the successive frames, the temporal information of the successive frames, in the form of motion vectors, is also encoded using entropy-based schemes.
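The zig-zag scan can be illustrated with the following sketch of the classic scan order (the helper names are hypothetical). The scan visits coefficients roughly from low to high spatial frequency, so the trailing run of near-zero high-frequency coefficients forms long zero runs that entropy coding handles compactly:

```python
def zigzag_order(n=8):
    """Return the (row, col) visiting order for the classic zig-zag
    scan of an n x n block of DCT coefficients."""
    order = []
    # Walk the anti-diagonals, alternating direction on each one.
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 else reversed(diag))
    return order

def zigzag_scan(block):
    """Flatten a square coefficient block in zig-zag order."""
    return [block[r][c] for r, c in zigzag_order(len(block))]
```

For an 8x8 block this reproduces the familiar ordering (0,0), (0,1), (1,0), (2,0), (1,1), (0,2), and so on.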
In addition to the spatial and temporal information, the color information corresponding to the successive frames is also compressed by exploiting the poor color acuity of human vision. The video signals are represented by a luma component Y′ and two chroma components CB and CR, where CB and CR are the blue-difference and red-difference chroma components, respectively. As long as the luma component Y′ is conveyed with full detail, the detail in the chroma components CB and CR can be reduced by subsampling (filtering or averaging).
However, there are cases where the encoded stream is captured from a storage media device or received through a transmission medium. Errors in capturing (such as reading from digital or analog tapes) or in the transmission medium (over wireless or lossy networks) may introduce bit errors that lead to errors in decoding of the captured or received encoded stream. This in turn leads to erroneous decoding or loss of information, i.e., of the luma component Y′ and/or the chroma components CB and CR. If such errors result in a loss of information in the chroma channels, they are termed chroma dropout errors. Chroma dropout errors degrade the end user's viewing experience, so it becomes quite important for media service and content providers to verify the quality of the delivered content. The verification could be performed by manually checking the video data, but that would be impractical and unreliable.
In light of the above, there is a need for an invention that enables accurate, automated detection of chroma dropout errors without being computationally intensive.