The present invention relates to digital decoders of sequences of video images, and, more particularly, to a method for recognizing the progressive or interlaced content of an image to improve the effectiveness of the video coding for low cost applications. Due to the importance of the MPEG standard in treating digitized video sequences, reference will be made to an MPEG2 system to illustrate the present invention. The present invention is also applicable to systems that transfer video sequences based on different standards.
The MPEG (Moving Pictures Experts Group) standard defines a set of algorithms dedicated to the compression of sequences of digitized pictures. These techniques are based on the reduction of the spatial and temporal redundance of the sequence. Reduction of spatial redundance is achieved by independently compressing the single images via quantization, discrete cosine transform (DCT) and Huffman coding.
The reduction of temporal redundance is obtained by exploiting the correlation that exists between successive pictures of a sequence. Each image can be expressed locally as a translation of a preceding and/or successive image of the sequence. To this end, the MPEG standard uses three kinds of pictures: I (Intra Coded Frame), P (Predicted Frame) and B (Bidirectionally Predicted Frame). The I pictures are coded in a fully independent mode. The P pictures are coded with respect to a preceding I or P picture in the sequence. The B pictures are coded with respect to two pictures of the I or P kind, namely the preceding one and the following one in the video sequence (see FIG. 1).
A typical sequence of pictures can be I B B P B B P B B I B . . . , for example. This is the order in which they will be viewed. Given that any P is coded with respect to the preceding I or P, and any B is coded with respect to the preceding and following I or P, it is necessary that the decoder receive the P pictures before the B pictures, and the I pictures before the P pictures. Therefore, the order of transmission of the pictures will be I P B B P B B I B B . . .
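The reordering just described can be sketched in a few lines. This is an illustrative reconstruction only; the function name and the use of type strings are hypothetical, not part of the standard:

```python
# Illustrative sketch: convert a sequence of picture types from display
# order to MPEG transmission/coding order. Each group of B pictures is
# emitted only after the anchor (I or P) that follows them in display
# order, since the decoder needs both anchors before decoding a B.
def display_to_coding_order(display_order):
    coding_order = []
    pending_b = []  # B pictures waiting for their following anchor
    for pic in display_order:
        if pic in ('I', 'P'):               # anchor picture
            coding_order.append(pic)        # send the anchor first...
            coding_order.extend(pending_b)  # ...then the Bs preceding it
            pending_b = []
        else:                               # 'B' picture
            pending_b.append(pic)
    coding_order.extend(pending_b)          # any trailing B pictures
    return coding_order
```

For the display sequence I B B P B B P this yields I P B B P B B, matching the transmission order given above.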
Pictures are processed by the coder sequentially, in the indicated order, and are successively sent to a decoder which decodes and reorders them, thus allowing their successive displaying. To code a B picture it is necessary for the coder to keep in a dedicated memory buffer, called a frame memory, the I and P pictures, coded and thereafter decoded, to which the current B picture refers, thus requiring an appropriate memory capacity.
One of the most important functions in coding is motion estimation. Motion estimation is based on the following consideration. A set of pixels of a frame may appear in the successive picture in a position obtained by translating its position in the preceding one. These translations of objects may expose parts that were not previously visible, as well as changes of their shape, such as during a zoom, for example.
The family of algorithms suitable to identify and associate these portions of pictures is generally referred to as motion estimation. Such an association of pixels is instrumental to calculate a difference picture removing redundant temporal information, thus making more effective the successive processes of DCT compression, quantization and entropic coding.
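As an illustration of the block-matching family of algorithms referred to above, a minimal full-search estimator over the luminance component might look as follows. The block size, search radius and exhaustive search strategy are assumptions for the sketch, not the method of the coder described herein:

```python
# A minimal full-search block-matching sketch on the luminance plane:
# for a block of the current frame, find the displacement within the
# reference frame minimizing the sum of absolute differences (SAD).
def best_motion_vector(cur, ref, bx, by, bs, search):
    """cur, ref: 2-D lists of luminance samples; (bx, by): top-left
    corner of the block in cur; bs: block size; search: radius."""
    h, w = len(ref), len(ref[0])
    best_sad, best_mv = None, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = by + dy, bx + dx
            if ry < 0 or rx < 0 or ry + bs > h or rx + bs > w:
                continue  # candidate falls outside the reference frame
            sad = sum(abs(cur[by + i][bx + j] - ref[ry + i][rx + j])
                      for i in range(bs) for j in range(bs))
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_sad, best_mv
```

The returned vector is the association of pixels used to build the difference picture that removes the redundant temporal information.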
A typical example of a system using this method may be illustrated based upon the MPEG-2 standard. A typical block diagram of a video MPEG-2 coder is depicted in FIG. 1. Such a system is made of the following functional blocks:
1) Chroma filter block from 4:2:2 to 4:2:0. In this block there is a low pass filter operating on the chrominance component, which allows the substitution of any pixel with the weighted sum of neighboring pixels placed on the same column and multiplied by appropriate coefficients. This allows a successive subsampling by two, thus obtaining a halved vertical definition of the chrominance.
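A hypothetical sketch of this vertical filtering and subsampling on a single chrominance column is shown below; the 3-tap kernel [1, 2, 1]/4 is an assumed example, not the coder's actual filter coefficients:

```python
# Assumed-kernel sketch of the 4:2:2 -> 4:2:0 chroma filtering: each
# chrominance pixel is replaced by a weighted sum of its vertical
# neighbours on the same column, then the column is subsampled by two.
def chroma_422_to_420_column(col):
    n = len(col)
    filtered = []
    for y in range(n):
        above = col[max(y - 1, 0)]       # edge rows are replicated
        below = col[min(y + 1, n - 1)]
        filtered.append((above + 2 * col[y] + below) // 4)
    return filtered[::2]                 # halve the vertical definition
```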
2) Frame ordinator. This block is composed of one or several frame memories outputting the frames in the coding order required by the MPEG standard. For example, if the input sequence is I B B P B B P etc., the output order will be I P B B P B B . . .
The Intra coded picture I is a frame or a semi-frame still containing its temporal redundance. The Predicted picture P is a frame or semi-frame from which the temporal redundance with respect to the preceding I or P (previously coded and decoded) has been removed. The Bidirectionally predicted picture B is a frame or a semi-frame whose temporal redundance with respect to the preceding I and successive P (or preceding P and successive P) has been removed. In both cases the I and P pictures must be considered as already coded and decoded.
Each frame buffer in the format 4:2:0 occupies the following memory space:
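The figure itself is not given in the text above. Purely as an illustration, assuming a PAL frame of 720*576 pixels at 8 bits per sample (an assumption, not a value from the source), the 4:2:0 buffer size works out as follows:

```python
# Illustrative only: the frame size (PAL, 720 x 576) is an assumption.
# In 4:2:0 each of the two chrominance components has half the
# horizontal and half the vertical resolution of the luminance.
def frame_buffer_bits_420(width, height, bits_per_sample=8):
    luma = width * height
    chroma = 2 * (width // 2) * (height // 2)  # U and V planes
    return (luma + chroma) * bits_per_sample
```

With the assumed values, frame_buffer_bits_420(720, 576) gives 4,976,640 bits per frame buffer.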
3) Estimator. This is the block that removes the temporal redundance from the P and B pictures. This functional block operates only on the most energetic, and therefore richest in information, component of the pictures which compose the sequence to be coded, namely the luminance component.
4) DCT. This is the block that implements the discrete cosine transform according to the MPEG-2 standard. The I picture and the error pictures P and B are divided in blocks of 8*8 pixels Y, U, and V on which the DCT transform is performed.
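The transform applied to each 8*8 block can be written directly from its separable definition. The sketch below is a slow reference form; production coders use fast factorizations:

```python
import math

# Direct (slow) form of the 8x8 two-dimensional DCT applied to each
# Y, U and V block. This is a reference sketch of the transform only,
# not an efficient implementation.
def dct_8x8(block):
    N = 8
    def c(k):
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for y in range(N):
                for x in range(N):
                    s += (block[y][x]
                          * math.cos((2 * y + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * x + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out
```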
5) Quantizer Q. An 8*8 block resulting from the DCT transform is then divided by a quantizing matrix to reduce the magnitude of the DCT coefficients. In such a case, the information associated to the highest frequencies, less visible to human sight, tends to be removed. The result is reordered and sent to the successive block.
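The division by the quantizing matrix can be sketched as below; the flat example matrix used in the comment is an assumption for illustration, since real matrices grow toward the high-frequency corner:

```python
# Sketch of the quantization step: each DCT coefficient is divided by
# the corresponding entry of the quantizing matrix and rounded, so that
# coefficients with large divisors (high frequencies) tend to zero.
def quantize(dct_block, quant_matrix):
    return [[round(dct_block[u][v] / quant_matrix[u][v])
             for v in range(8)] for u in range(8)]
```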
6) Variable Length Coding (VLC). The stream of coefficients output from the quantizer tends to contain a large number of null coefficients followed by nonnull values. The null values preceding the first nonnull value are counted, and the count figure forms the first portion of a codification word, the second portion of which represents the nonnull coefficient.
These pairs tend to assume values more probable than others. The most probable ones are coded with relatively short words composed of 2, 3 or 4 bits while the least probable are coded with longer words. Statistically, the number of output bits is smaller than if such a criterion were not implemented.
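The pairing stage described above can be sketched as follows; the mapping of each pair to its variable-length codeword by a Huffman-style table is omitted:

```python
# Sketch of the (zero run, level) pairing: count the null values
# preceding each nonnull coefficient and emit the pair. Each pair
# would then be looked up in a variable-length code table.
def run_level_pairs(coeffs):
    pairs = []
    run = 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))
            run = 0
    return pairs
```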
7) Multiplexer and buffer. Data generated by the variable length coder, the quantizing matrices, the motion vectors and other syntactic elements are assembled for constructing the final syntax contemplated by the MPEG-2 standard. The resulting bitstream is stored in a memory buffer, the limit size of which is defined by the MPEG-2 standard requirement that the buffer cannot be overfilled. The quantizer block Q attends to such a limit by making the division of the DCT 8*8 blocks dependent upon how far the system is from the filling limit of such a memory buffer and on the energy of the 8*8 source block taken upstream of the motion estimation and DCT transform steps.
8) Inverse Variable Length Coding (I-VLC). The variable length coding functions specified above are executed in an inverse order.
9) Inverse Quantization (IQ). The words output by the I-VLC block are reordered in the 8*8 block structure, which is multiplied by the same quantizing matrix that was used for its preceding coding.
10) Inverse DCT (I-DCT). The DCT transform function is inverted and applied to the 8*8 block output by the inverse quantization process. This permits passing from the domain of spatial frequencies to the pixel domain.
11) Motion Compensation and Storage. At the output of the I-DCT, the following may be present. A decoded I frame (or semiframe) that must be stored in a respective memory buffer for removing the temporal redundance with respect thereto from successive P and B pictures. A decoded prediction error frame (or semiframe) P or B that must be summed to the information previously removed during the motion estimation phase. In the case of a P picture, such a resulting sum, stored in a dedicated memory buffer, is used during the motion estimation process for the successive P and B pictures. These frame memories are distinct from the frame memories that are used for re-arranging the blocks.
12) Display Unit from 4:2:0 to 4:2:2. This unit converts the frames from the format 4:2:0 to the format 4:2:2 and generates the interlaced format for the successive displaying. The chrominance components eliminated by the chroma filter block are restored by interpolation of the neighboring pixels. The interpolation forms a weighted sum of the neighboring pixels multiplied by appropriate coefficients, and limits the value so obtained between 0 and 255.
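The upsampling and clamping on a single chrominance column might be sketched as follows; the simple averaging kernel is an assumption for illustration, not the unit's actual coefficients:

```python
# Assumed-kernel sketch of the display-unit chroma upsampling: each
# missing chrominance row is rebuilt as a weighted sum of its vertical
# neighbours, and the result is limited to the range 0..255.
def chroma_420_to_422_column(col):
    out = []
    for y, p in enumerate(col):
        out.append(p)                         # keep the existing row
        nxt = col[min(y + 1, len(col) - 1)]   # replicate the bottom edge
        interp = (p + nxt + 1) // 2           # weighted sum of neighbours
        out.append(max(0, min(255, interp)))  # limit between 0 and 255
    return out
```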
The arrangement of the functional blocks depicted in FIG. 1 within an architecture implementing the above-described coder is shown in FIG. 2. A distinctive feature is that the frame ordinator block, the motion compensation block for storing the already reconstructed P and I pictures, and the multiplexer and buffer block for storing the bitstream produced by the MPEG-2 coding are integrated in memory devices external to the integrated circuit of the core of the coder. The decoder accesses these memory devices through a single interface suitably managed by an integrated controller.
Moreover, the preprocessing block converts the received pictures from the format 4:2:2 to the format 4:2:0 by filtering and subsampling the chrominance. The post-processing block implements a reverse function during the decoding and displaying phase of the pictures.
During the coding phase, decoding functions are also employed for generating the reference frames for the motion estimation. For example, the first I picture is coded, then decoded, stored (see the motion compensation and storage block) and used for calculating the prediction error that will be used to code the successive P and B pictures.
The play-back phase of the data stream previously generated by the coding process uses only the inverse functional blocks (I-VLC, I-Q, I-DCT, etc.), never the direct functional blocks. From this point of view, it may be said that the coding and the decoding performed for displaying the pictures are nonconcurrent processes within the integrated architecture.
The pre-requisites of an MPEG2 coder will now be discussed. As already described in patent applications EP No. 97830605.8 and EP No. 98830163.6, which are assigned to the assignee of the present invention, the motion estimation algorithm, as concerns its first step, makes available the i-th macroblock placed on the preceding Top and Bottom fields in a working memory having the size of a macroblock, without any burden in terms of band occupation by the frame memory.
For what concerns the working memory, for example, a macroblock having the 4:2:0 format is made of 6 blocks of 64 pixels each, each pixel being coded with 8 bits. In particular, the proposed method has been optimized to work on the luminance component, therefore, each macroblock is made of 4 luminance blocks.
Referring to FIG. 3, from left to right there may be recognized the pictures that will eventually reach the MPEG2 coder and those that are stored in the frame memory as already acquired pictures. Bc is the Bottom field of the current picture which will feed the coder. Tc is the Top field of the current picture which will feed the coder. Bp is the Bottom field of the preceding picture, which is stored in the frame memory associated to the coder. Tp is the Top field of the preceding picture, which is stored in the frame memory associated to the coder.
In view of the foregoing background, an object of the present invention is to detect the progressive or interlaced content of a picture for improving the effectiveness of the coding of video sequences, especially in low cost applications. A further aim is to improve the effectiveness of the filtering applied to the chrominance component of the pictures input to the coder.
Another object of the invention is to establish whether the picture decomposed in the fields Bc and Tc is progressive or interlaced.
These and other objects, features and advantages in accordance with the present invention are provided by a method for recognizing a progressive or interlaced content of video pictures during their processing in a coder. This is done by defining the Bottom field Bc of the current picture to enter the coder, and the Top field Tc of the current picture to enter the coder. The Bottom field Bp of the preceding picture already acquired by the coder is stored in the associated frame memory, and the Top field Tp of the preceding picture already acquired by the coder is stored in the associated frame memory. This establishes whether the current picture so decomposed in the fields Bc and Tc is progressive or interlaced. The method includes executing the following operations at least on one of the components (luminance or chrominance) of the video signal.
a) Defining a macroblock belonging to a frame of the preceding picture and having dimensions R*S pixels, half of which is placed on the Top field Tp and the other half on the Bottom field Bp, each half having dimensions (R/2)*S.
b) For the chosen component of the video signal, calculating a first pair of coefficients (COEFF_1, COEFF_2) equivalent to
the sum, extended to all the columns and to all the even rows of the macroblock, of the absolute values of the differences among the values assumed by the component of the video signal in the pixels of the same column and of consecutive rows belonging to the Top semi-frame and Bottom semi-frame, respectively, and
the sum, extended to all the columns and to every fourth row of the macroblock, of the absolute values of the differences among the values assumed by the component of the video signal in the pixels of the same column and of consecutive rows of the same parity belonging to the Top semi-frame and Bottom semi-frame, respectively.
c) Verifying whether the first one of the coefficients of the pair is greater than or equal to a first prefixed positive real number (α) of times the second coefficient, and incrementing a first counter (CONT_1) at each positive verification.
d) Incrementing a second counter (NUM_MACROBLOCK) at each macroblock so tested.
e) Calculating for each row of each Top semi-frame a second pair of coefficients (COEFF_3, COEFF_4) equivalent to
for each row, the sum, extended to all the columns of each semi-frame, of the absolute values of the differences among the values assumed by the component of the video signal in pixels of the Bottom semi-frame of the preceding picture and of the Bottom semi-frame of the current picture, belonging to the row following the considered row and to the same column, and
the sum, extended to all the columns of each semi-frame, of the absolute values of the differences among the values assumed by the component of the video signal in pixels of the same column, belonging respectively to the considered row of the Top semi-frame of the preceding picture and to the row following the considered row of the Bottom semi-frame of the current picture.
f) Verifying whether the second coefficient of the second pair is greater than or equal to a second prefixed positive real number (β) of times the first coefficient of the second pair, and incrementing a third counter (CONT_2) at each positive verification.
g) Incrementing a fourth counter (NUM_RIGHE) at each row so tested. h) Verifying whether the content of the first counter (CONT_1) is greater than or equal to a third prefixed positive real number (γ) of times the content of the second counter (NUM_MACROBLOCK) and whether, at the same time, the content of the third counter (CONT_2) is greater than or equal to a fourth prefixed positive real number (δ) of times the content of the fourth counter (NUM_RIGHE). If so, the frame composed of the Top and Bottom semi-frames is considered an interlaced frame; if not, the frame is a progressive one.
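The per-macroblock part of the procedure (steps b) to d) together with the final verification on the first counter) can be sketched as follows. The row layout, the threshold values and all names are assumptions used only to make the ratio test concrete; this is not the claimed implementation:

```python
# Hedged sketch of steps b) to d) and of the final test on CONT_1.
# COEFF_1 sums differences between consecutive rows (opposite fields);
# COEFF_2 sums differences between rows of the same parity (the same
# field). On interlaced material with motion the inter-field term
# dominates, so COEFF_1 tends to exceed alpha times COEFF_2. The
# thresholds alpha and gamma are assumed tuning parameters.
def macroblock_is_interlaced(mb, alpha):
    """mb: R x S list of luminance rows, even rows taken from the Top
    field and odd rows from the Bottom field."""
    rows, cols = len(mb), len(mb[0])
    coeff_1 = sum(abs(mb[r][c] - mb[r + 1][c])
                  for r in range(0, rows - 1, 2) for c in range(cols))
    coeff_2 = sum(abs(mb[r][c] - mb[r + 2][c])
                  for r in range(0, rows - 2, 4) for c in range(cols))
    return coeff_1 >= alpha * coeff_2

def frame_looks_interlaced(macroblocks, alpha, gamma):
    # CONT_1 counts macroblocks judged interlaced; the frame-level test
    # compares it against gamma times the macroblock count.
    cont_1 = sum(1 for mb in macroblocks
                 if macroblock_is_interlaced(mb, alpha))
    num_macroblock = len(macroblocks)
    return cont_1 >= gamma * num_macroblock
```

A macroblock whose even rows differ strongly from its odd rows (a combing pattern) passes the test, while a smooth vertical gradient does not.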
Preferably, the sums of paragraph e) are calculated after having discarded the first G and the last H rows of each semi-frame. G and H are prefixed integers whose sum is an integer multiple of the number of rows of the preceding macroblock.
Preferably, the sums of paragraph e) are calculated after having discarded the first I columns on the right and the last L columns on the left of each semi-frame. I and L are prefixed integers whose sum is an integer multiple of the number of columns of the preceding macroblock. The numbers I and L can be set equal to the number of columns of the preceding macroblock.