1. Field of the Invention
The present invention relates generally to digital video compression, and more particularly to a system for displaying field structure encoded MPEG video data streams which lack a strictly enforced alternation of top and bottom fields.
2. Description of the Related Art
Video Compression Background
Full-motion digital video requires a large amount of storage and data transfer bandwidth. Thus, video systems use various types of video compression algorithms to reduce the amount of necessary storage and transfer bandwidth. In general, different video compression methods exist for still graphic images and for full-motion video. Intraframe compression methods are used to compress data within a still image or single frame using spatial redundancies within the frame. Interframe compression methods are used to compress multiple frames, i.e., motion video, using the temporal redundancy between the frames. Interframe compression methods are used exclusively for motion video, either alone or in conjunction with intraframe compression methods.
Intraframe, or still image, compression techniques generally use frequency domain techniques, such as the discrete cosine transform (DCT). Intraframe compression typically uses the frequency characteristics of a picture frame to efficiently encode a frame and remove spatial redundancy. Examples of video data compression for still graphic images are JPEG (Joint Photographic Experts Group) compression and RLE (run-length encoding). JPEG compression is a group of related standards that use the discrete cosine transform (DCT) to provide either lossless (no image quality degradation) or lossy (imperceptible to severe degradation) compression. Although JPEG compression was originally designed for the compression of still images rather than video, JPEG compression is used in some motion video applications. The RLE compression method operates by testing for duplicated pixels in a single line of the bit map and storing the number of consecutive duplicate pixels rather than the data for the pixels themselves.
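By way of illustration, the run-length encoding scheme described above can be sketched as follows. This is a minimal illustrative sketch, not part of any standard; the function names are assumptions for the example only.

```python
def rle_encode(line):
    """Run-length encode one scan line of pixels as (count, pixel) pairs:
    consecutive duplicate pixels are stored as a count plus one value,
    rather than storing each duplicate pixel."""
    runs = []
    i = 0
    while i < len(line):
        j = i
        while j < len(line) and line[j] == line[i]:
            j += 1                      # extend the run of duplicates
        runs.append((j - i, line[i]))
        i = j
    return runs

def rle_decode(runs):
    """Expand (count, pixel) pairs back into the original scan line."""
    out = []
    for count, pixel in runs:
        out.extend([pixel] * count)
    return out

line = [7, 7, 7, 7, 2, 2, 9]
encoded = rle_encode(line)
print(encoded)                          # [(4, 7), (2, 2), (1, 9)]
print(rle_decode(encoded) == line)      # True
```

As the example shows, the method compresses well only when a line contains long runs of identical pixels, which is why it suits simple graphics better than natural video.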
In contrast to compression algorithms for still images, most video compression algorithms are designed to compress full motion video. As mentioned above, video compression algorithms for motion video use a concept referred to as interframe compression to remove temporal redundancies between frames. Interframe compression involves storing only the differences between successive frames in the data file. Interframe compression stores the entire image of a key frame or reference frame, generally in a moderately compressed format. Successive frames are compared with the key frame, and only the differences between the key frame and the successive frames are stored. Periodically, such as when new scenes are displayed, new key frames are stored, and subsequent comparisons begin from this new reference point. It is noted that the interframe compression ratio may be kept constant while varying the video quality. Alternatively, interframe compression ratios may be content-dependent, i.e., if the video clip being compressed includes many abrupt scene transitions from one image to another, the compression is less efficient. Examples of video compression which use an interframe compression technique are MPEG, DVI and Indeo, among others.
MPEG Background
A compression standard referred to as MPEG (Moving Pictures Experts Group) compression is a set of methods for compression and decompression of full motion video images which uses the interframe and intraframe compression techniques described above. MPEG compression uses both motion compensation and discrete cosine transform (DCT) processes, among others, and can yield compression ratios of more than 200:1.
The two predominant MPEG standards are referred to as MPEG-1 and MPEG-2. The MPEG-1 standard generally concerns inter-field data reduction using block-based motion compensation prediction (MCP), which generally uses temporal differential pulse code modulation (DPCM). The MPEG-2 standard is similar to the MPEG-1 standard, but includes extensions to cover a wider range of applications, including interlaced digital video such as high definition television (HDTV).
Interframe compression methods such as MPEG are based on the fact that, in most video sequences, the background remains relatively stable while action takes place in the foreground. The background may move, but large portions of successive frames in a video sequence are redundant. MPEG compression uses this inherent redundancy to encode or compress frames in the sequence.
An MPEG stream includes three types of pictures, referred to as the Intraframe (I), the Predicted frame (P), and the Bi-directional Interpolated frame (B). The I or Intraframes contain the video data for the entire frame of video and are typically placed every 10 to 15 frames. Intraframes provide entry points into the file for random access, and are generally only moderately compressed. Predicted frames are encoded with reference to a past frame, i.e., a prior Intraframe or Predicted frame. Thus P frames only include changes relative to prior I or P frames. In general, Predicted frames receive a fairly high amount of compression and are used as references for future Predicted frames. Thus, both I and P frames are used as references for subsequent frames. Bi-directional pictures include the greatest amount of compression and require both a past and a future reference in order to be encoded. Bi-directional frames are never used as references for other frames.
It is noted that MPEG compression is based on two types of redundancies in video sequences, these being spatial, which is the redundancy in an individual frame, and temporal, which is the redundancy between consecutive frames. Spatial compression is achieved by considering the frequency characteristics of a picture frame. Each frame is divided into non-overlapping blocks and respective sub-blocks, and each block is transformed via the discrete cosine transform (DCT).
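The block transform underlying the spatial compression can be illustrated with a naive two-dimensional DCT-II over one 8x8 block, as applied per block in MPEG. This is an illustrative sketch of the mathematics only (an unoptimized direct implementation, not the fast transform a real encoder would use).

```python
import math

N = 8  # MPEG blocks are 8x8 samples

def dct_2d(block):
    """Direct 2-D DCT-II of an N x N block. Energy in smooth image regions
    concentrates in the low-frequency coefficients (top-left of the output),
    which is what makes the subsequent entropy coding effective."""
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = cu * cv * s
    return out

flat = [[100] * N for _ in range(N)]    # a perfectly flat (redundant) block
coeffs = dct_2d(flat)
print(round(coeffs[0][0]))              # 800: all energy in the DC coefficient
print(abs(coeffs[3][5]) < 1e-9)         # True: the AC coefficients vanish
```

A flat block compresses to a single significant coefficient; a block with detail spreads energy into more coefficients and therefore costs more bits.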
Because of the picture dependencies, i.e., the temporal compression, the order in which the frames are transmitted, stored, or retrieved is not necessarily the display order, but rather an order required by the decoder to properly decode the pictures in the data stream. For example, a typical sequence of frames, in display order, might be shown as follows:
     I   B   B   P   B   B   P   B   B   P   B   B   I   B   B   P   B   B   P
     0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18
By contrast, the data stream order corresponding to the given display order would be as follows:
     I   P   B   B   P   B   B   P   B   B   I   B   B   P   B   B   P   B   B
     0   3   1   2   6   4   5   9   7   8  12  10  11  15  13  14  18  16  17
Because the B frame depends on a subsequent I or P frame in display order, the I or P frame must be transmitted and decoded before the dependent B frame.
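The reordering described above can be sketched as a single pass over the display-order sequence: B frames are held back until the I or P anchor they depend on has been emitted. This is an illustrative sketch; the function name and frame representation are assumptions for the example.

```python
def display_to_coded(frames):
    """Reorder (type, index) frames from display order to coded order.

    Each B frame depends on the next I or P frame in display order, so
    that anchor frame must be moved ahead of the B frames preceding it."""
    coded, pending_b = [], []
    for frame in frames:
        ftype, _ = frame
        if ftype == "B":
            pending_b.append(frame)     # hold until its future anchor is emitted
        else:                           # I or P: emit the anchor, then the held Bs
            coded.append(frame)
            coded.extend(pending_b)
            pending_b.clear()
    return coded + pending_b            # trailing Bs (none in a closed sequence)

display = [(t, i) for i, t in enumerate("IBBPBBPBBPBBIBBPBBP")]
coded = display_to_coded(display)
print(" ".join(f"{t}{i}" for t, i in coded))
# I0 P3 B1 B2 P6 B4 B5 P9 B7 B8 I12 B10 B11 P15 B13 B14 P18 B16 B17
```

Running the sketch on the display order tabulated above reproduces the coded (data stream) order shown in the second table.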
The video decoding process is generally the inverse of the video encoding process and is employed to reconstruct a motion picture sequence from a compressed and encoded data stream. The data in the data stream is decoded according to a syntax that is defined by the data compression algorithm. The decoder must first identify the beginning of a coded frame, identify the type of frame, then decode the image data in each individual frame. In accordance with the discussion above, the frames may also need to be re-ordered before they are displayed in accordance with their display order instead of their coding order. After the frames are re-ordered, they may then be displayed on an appropriate display device.
As the encoded video data is decoded, the decoded data is stored into a frame store buffer. The decoded data is in the form of decompressed or decoded I, P or B frames. A display controller retrieves the frame data for display by an appropriate display device, such as a TV monitor or the like. The present disclosure relates to MPEG-2 decoders compliant with International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 13818-2 for supporting the NTSC (National Television System Committee) or PAL (Phase Alternating Line) standards. The NTSC standard is primarily for use in the United States (U.S.), whereas the PAL standard is primarily for use in Europe.
A television picture is typically comprised of two fields, referred to as the top and bottom field. The top field contains every other scan line in the picture beginning with the first scan line. The bottom field contains every other line beginning with the second line. In other words, the top field comprises the odd horizontal scan lines, and the bottom field comprises the even horizontal scan lines. A television scans or draws all the top field lines, followed by all the bottom field lines, in an interlaced fashion. Hence, the display controller preferably provides the top and bottom fields to the display in strict alternation.
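The splitting of a picture into the two fields described above can be sketched as follows; the function names and the list-of-scan-lines frame representation are assumptions for this illustration only.

```python
def split_fields(frame):
    """Split a frame (a list of scan lines, numbered from 1) into its
    two fields: the top field holds the odd lines (1, 3, 5, ...) and
    the bottom field holds the even lines (2, 4, 6, ...)."""
    top = frame[0::2]       # lines 1, 3, 5, ... (0-based indices 0, 2, 4, ...)
    bottom = frame[1::2]    # lines 2, 4, 6, ...
    return top, bottom

def interlace(top, bottom):
    """Re-interleave the two fields back into a full frame, as the
    display scans top-field lines and bottom-field lines alternately."""
    frame = []
    for t, b in zip(top, bottom):
        frame.extend([t, b])
    return frame

frame = ["line%d" % n for n in range(1, 7)]
top, bottom = split_fields(frame)
print(top)                              # ['line1', 'line3', 'line5']
print(bottom)                           # ['line2', 'line4', 'line6']
print(interlace(top, bottom) == frame)  # True
```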
A picture encoded using the MPEG-2 coding standard may be encoded in either a progressive or interlaced format, referred to as a frame picture structure or field picture structure, respectively. Where a video sequence is encoded using the field structure, i.e., in interlaced format, problems may arise in the decoding because the field picture structure is decoded in an interlaced order which may not be synchronized with the interlaced order of the display sequence. While the display order of top and bottom fields occurs in strict alternation, the order within the decoded sequence need not adhere to a strict alternation. The use of a technique known as 3:2 pulldown naturally causes the parity of each frame's first field to evolve over the sequence, and under normal decoding circumstances this evolution still yields a strictly alternating field presentation to the display.
A repeat-first-field flag is associated with each frame in an MPEG sequence encoded for pulldown. This repeat-first-field flag is asserted for frames in which a 3:2 pulldown occurs, i.e., a repeat of the first field subsequent to the display of the second field. This results in the current frame extending over three field intervals rather than the normal two. Use of periodic pulldowns provides a technique for frame-rate conversion; for example, 24 frames per second may be converted to 30 frames per second by executing a pulldown for every other frame.
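The 3:2 pulldown frame-rate conversion can be sketched as follows. This is an illustrative model only (the function name and the "T"/"B" parity labels are assumptions); it shows that repeating the first field of every other frame turns 24 frames into 60 fields, i.e., 30 frames per second, while the fields presented to the display still alternate strictly.

```python
def pulldown_fields(num_frames):
    """Emit the (frame, parity) field sequence for 3:2 pulldown of a
    24 frames/s source: every other frame has repeat_first_field set,
    so frames alternately span 3 and 2 field intervals."""
    fields = []
    parity = "T"                        # parity of the next field to display
    for n in range(num_frames):
        count = 3 if n % 2 == 0 else 2  # repeat_first_field on every other frame
        for _ in range(count):
            fields.append((n, parity))
            parity = "B" if parity == "T" else "T"
    return fields

fields = pulldown_fields(4)
print("".join(p for _, p in fields))    # TBTBTBTBTB: alternation is preserved
print(len(pulldown_fields(24)))         # 60 fields per second of 24 fps film
```

Note that each pulled-down frame ends on the same parity it started with, so the parity of the first field naturally evolves from frame to frame while the displayed field sequence remains strictly alternating.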
In a well-encoded MPEG data stream, the encoder will set each frame's top-field-first and repeat-first-field indicators such that each frame's last field is the opposite of the next frame's first field, e.g., if frame P0 ends on a bottom field, then frame P1 begins on a top field. Although a good encoder will generate this alternation, it is not required by the MPEG standard. Also, even if all encoders generate this alternation, the alternation can be broken when multiple data streams are indiscriminately concatenated to form a new data stream.
One source of such concatenation is commercial advertisements. A broadcaster may concatenate clips from several different movies, insert non-3:2 pulldown commercials into a movie, or distribute such commercials among concatenated movie clips. Such mixing may break the alternation described above. A frame's last field may be the same as the next frame's first field, rather than the opposite.
Under such conditions, the decoder could present fields in the wrong order. The encoder encodes fields in the temporal order in which it intends the decoder to present them. If the encoder intends the top field to be presented first, the encoder encodes the top field first. The same is true for the bottom field. If the decoder presents the top field when it should have presented the bottom field, field inversion occurs. If the decoder avoids field inversion by reversing the time order of the top and bottom fields, temporal distortion occurs.
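The detection of a broken alternation from the per-frame flags described above can be sketched as follows. The flag names mirror the MPEG-2 picture flags top_field_first and repeat_first_field, but the sequence representation and function name are assumptions for this illustration.

```python
def find_alternation_breaks(frames):
    """Scan a list of (top_field_first, repeat_first_field) flag pairs and
    report the indices of frames whose first field has the same parity as
    the previous frame's last field, i.e., where a field inversion (or a
    compensating temporal distortion) would arise on display."""
    breaks = []
    last = None                         # parity of the last displayed field
    for i, (tff, rff) in enumerate(frames):
        first = "T" if tff else "B"
        # with repeat_first_field set, the frame spans three fields and ends
        # on the same parity it started with; otherwise it ends on the opposite
        last_of_frame = first if rff else ("B" if first == "T" else "T")
        if last is not None and first == last:
            breaks.append(i)            # same parity twice in a row
        last = last_of_frame
    return breaks

# three well-encoded top-field-first frames, then a spliced-in clip that
# starts on the wrong parity
seq = [(1, 0), (1, 0), (1, 0), (0, 0)]
print(find_alternation_breaks(seq))     # [3]
```

A decoder armed with such a check can decide, at each detected break, whether to accept a momentary field inversion or a momentary time reversal, which is the choice discussed above.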
An interruption of the desired field alternation can result in a sustained, undesirable field inversion. A field inversion occurs when the bottom field is displayed on the odd lines and the top field is displayed on the even lines. This provides a noticeable distortion in the display. Consider a smooth diagonal line traversing the display. Field inversion will cause this line to have zig-zag, or jagged, edges.
Similarly, an interruption of the desired field alternation can result in a sustained, undesirable time distortion. A time distortion occurs when the top and bottom fields are displayed in a time reversed order. This too causes a noticeable distortion. Consider an object moving from left to right across the display. During temporal distortion, the object momentarily appears to reverse direction. While a one-occurrence time reversal is typically unnoticeable, when this distortion persists the motion on the display becomes noticeably jerky.
The amount of memory is a major cost item in the production of video decoders. Thus, it is desired to provide a solution to the described problem without increasing the memory requirements more than is essential.