Video information may be represented by progressive video or interlaced video. Modern computer monitors typically display progressive video. Conventional television monitors and older computer monitors typically display interlaced video. High definition television may display both interlaced and progressive video.
Progressive video includes a series of frames, where each frame is drawn as consecutive lines from top to bottom. In interlaced video, each frame is divided into a number of fields. Typically, the frame is divided into two fields, one field containing half of the lines (e.g., the even-numbered lines) and the other field containing the other half of the lines (e.g., the odd-numbered lines). The fields of interlaced video are temporally ordered, however, so that neighboring fields may represent video information sampled at different times.
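The even/odd line split described above can be sketched in a few lines. This is an illustrative toy (the helper name `split_into_fields` and the use of NumPy arrays as grayscale images are assumptions for demonstration, not part of any standard):

```python
import numpy as np

def split_into_fields(frame):
    """Split a progressive frame into two interlaced fields (hypothetical
    helper for illustration): the top field takes the even-numbered lines
    and the bottom field takes the odd-numbered lines."""
    top_field = frame[0::2, :]     # lines 0, 2, 4, ...
    bottom_field = frame[1::2, :]  # lines 1, 3, 5, ...
    return top_field, bottom_field

# Toy 4-line "frame" where every pixel on line i has value i.
frame = np.arange(4).repeat(6).reshape(4, 6)
top, bottom = split_into_fields(frame)
```

Each field ends up with half of the frame's lines, which is why the deinterlacing methods below must somehow reconstruct the missing half.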
There is often a need to convert interlaced video into progressive video and vice versa. For example, suppose a television broadcaster transmits a conventional television program as a series of interlaced fields. If these interlaced fields are to be displayed on a modern computer monitor (or on a high definition television display) that displays progressive frames, the interlaced fields must be converted into progressive frames.
The conversion involves using one or more fields of interlaced video to generate a frame of progressive video and repeating the process so that a stream of interlaced video is converted into a stream of progressive video. This conversion is often called “deinterlacing”. There are several conventional methods of deinterlacing.
One conventional deinterlacing method is called “scan line interpolation,” in which the lines of a single interlaced field are duplicated to form a first half of the lines in the progressive frame. The second half of the lines in the progressive frame is formed by duplicating the same field again, offset by one line, to complete the progressive frame. This basic form of scan line interpolation is computationally straightforward and thus uses little, if any, processor resources. However, the vertical resolution of the progressive frame is only half of what the display is capable of displaying.
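The duplication scheme above can be sketched as follows. This is a minimal illustration, assuming a NumPy array as the field and an illustrative function name:

```python
import numpy as np

def line_double(field):
    """Deinterlace a single field by duplicating each of its lines
    (the simplest form of scan line interpolation)."""
    height, width = field.shape
    frame = np.empty((2 * height, width), dtype=field.dtype)
    frame[0::2, :] = field  # original field lines
    frame[1::2, :] = field  # the same lines again, offset by one line
    return frame

field = np.array([[10, 10], [30, 30]])
doubled = line_double(field)
```

Every line appears twice in the output frame, which is exactly why the vertical resolution is halved.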
One variation on the scan line interpolation method is that the second half of the lines in the progressive frame are generated by interpolating (e.g., averaging) the neighboring lines in the interlaced field. This requires somewhat more computational resources, but results in a relatively smooth image. Still, the vertical resolution is only half of what the display is capable of displaying.
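The averaging variant can be sketched like this (an illustrative implementation; the boundary handling for the bottom-most missing line is an assumption, since the source does not specify it):

```python
import numpy as np

def interpolate_lines(field):
    """Deinterlace one field by averaging vertically neighboring field
    lines to synthesize the missing lines (the averaging variant of
    scan line interpolation)."""
    height, width = field.shape
    frame = np.empty((2 * height, width), dtype=np.float64)
    frame[0::2, :] = field  # original field lines
    # Each missing line is the mean of the field lines above and below it;
    # the bottom-most missing line has no line below, so it is repeated.
    frame[1:-1:2, :] = (field[:-1, :] + field[1:, :]) / 2.0
    frame[-1, :] = field[-1, :]
    return frame

field = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 20.0]])
smooth = interpolate_lines(field)
```

The synthesized lines fall halfway between their neighbors, which is what makes the result smoother than plain duplication while still carrying only half the true vertical detail.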
One deinterlacing method that improves vertical resolution over scan line interpolation is called “field line merging,” in which lines from two consecutive fields are interwoven to form a progressive frame. However, the video information in the first field is not sampled at the exact same moment as the video information in the second field. If there is little movement in the image between the first and second fields, then field line merging tends to produce a quality image at relatively little processing cost. On the other hand, if there is movement between the first and second fields, simply combining fields will not result in a high fidelity progressive frame, since half the lines in the frame represent the video data at a given time, and half the lines in the frame represent a significantly different state at a different time.
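Field line merging (often called “weave”) is a straightforward interleave, sketched below with illustrative names:

```python
import numpy as np

def merge_fields(top_field, bottom_field):
    """Weave two consecutive fields into one progressive frame
    (field line merging)."""
    height, width = top_field.shape
    frame = np.empty((2 * height, width), dtype=top_field.dtype)
    frame[0::2, :] = top_field     # lines sampled at one instant
    frame[1::2, :] = bottom_field  # lines sampled ~1/60 s later (NTSC)
    return frame

top = np.array([[1, 1], [3, 3]])
bottom = np.array([[2, 2], [4, 4]])
frame = merge_fields(top, bottom)
```

Because alternating lines come from different sampling instants, any motion between the two fields shows up as the comb-like artifacts described above.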
More processing-intensive methods use complex motion compensation algorithms to determine where in the image there is motion, and where there is not. For those areas where there is no motion, field line merging is used because of its improved vertical resolution. For those areas where there is motion, scan line interpolation is used, since it eliminates the motion artifacts that would be caused by field line merging. Such motion compensation algorithms may be implemented by the motion estimation block of an MPEG encoder. However, such complex motion compensation methods require large amounts of processing and memory resources.
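The weave-where-still, interpolate-where-moving strategy can be sketched at a very simplified level. The per-pixel detector below (absolute difference between same-parity fields against a threshold) is a hedged stand-in for the complex motion estimation described above, not any standard algorithm, and all names are illustrative:

```python
import numpy as np

def motion_adaptive_deinterlace(prev_top, top, bottom, threshold=10.0):
    """Hypothetical motion-adaptive sketch: weave where the image is
    still, interpolate where it moves."""
    height, width = top.shape
    frame = np.empty((2 * height, width), dtype=np.float64)
    frame[0::2, :] = top  # current field lines are used as-is

    # Crude motion detector: compare the current top field with the
    # previous top field (same parity, so lines are directly comparable).
    motion = np.abs(top.astype(np.float64) - prev_top.astype(np.float64))
    still = motion < threshold

    # Fallback for moving areas: vertical interpolation within one field.
    interp = np.empty((height, width), dtype=np.float64)
    interp[:-1, :] = (top[:-1, :] + top[1:, :]) / 2.0
    interp[-1, :] = top[-1, :]

    # Still areas get the bottom-field lines (field line merging);
    # moving areas get the interpolated lines (scan line interpolation).
    frame[1::2, :] = np.where(still, bottom, interp)
    return frame

top = np.array([[10.0, 10.0], [50.0, 50.0]])
bottom = np.array([[20.0, 20.0], [60.0, 60.0]])
# Still scene: the result matches field line merging.
still_frame = motion_adaptive_deinterlace(top, top, bottom)
# Moving scene: the missing lines fall back to vertical interpolation.
moving_frame = motion_adaptive_deinterlace(top + 100.0, top, bottom)
```

Even this toy version needs an extra field of history and a full-frame difference pass, hinting at why production motion-compensated deinterlacers are expensive in processing and memory.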
Therefore, methods and systems are needed for deinterlacing to provide a relatively high fidelity progressive frame without having to dedicate the processor and memory resources required by complex motion compensation algorithms.
Moreover, due to different display device hardware and software configurations, both progressive and interlaced video data often need to be resampled or resized to present quality images based on a particular device's display resolution configuration. For instance, a computer monitor typically displays a fixed area of pixels in resolutions such as 640×480, 800×600, or 1024×768 pixels, as determined by the current display settings of the computer monitor.
Conventional systems and techniques to resample progressive video data result in image data of generally acceptable viewing quality. However, due to temporal differences between two fields of an interlaced video frame, interlaced data resampling is not well understood.
For instance, NTSC interlaced video sequences contain frames with two fields that differ temporally by 1/60th of a second. This temporal difference prevents resampling the two fields jointly (i.e., on a frame basis). That is, jointly resampling two interlaced fields of a frame generates a temporally aliased image, wherein otherwise smooth curves and lines are jagged and typically not of high viewing quality. Additionally, if the fields are resampled individually and then interleaved to form a frame, spatial resolution is lost, since each field contains only half of the frame's resolution. Accordingly, conventional systems and techniques to resample interlaced video data frames are substantially limited.
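The per-field resampling path described above can be sketched as follows. This is a simplified illustration with assumed names; a production resampler would also account for each field's vertical phase offset within the frame and use a better filter than linear interpolation:

```python
import numpy as np

def resample_field_vertically(field, new_height):
    """Linearly resample one field to new_height lines (simplified
    sketch; ignores field phase and uses a two-tap linear filter)."""
    old_height, _ = field.shape
    positions = np.linspace(0.0, old_height - 1, new_height)
    lower = np.floor(positions).astype(int)
    upper = np.minimum(lower + 1, old_height - 1)
    frac = (positions - lower)[:, None]
    return (1.0 - frac) * field[lower, :] + frac * field[upper, :]

def resample_interlaced_frame(top, bottom, new_frame_height):
    """Resample each field individually, then interleave into a frame."""
    field_height = new_frame_height // 2
    new_top = resample_field_vertically(top, field_height)
    new_bottom = resample_field_vertically(bottom, field_height)
    frame = np.empty((new_frame_height, top.shape[1]))
    frame[0::2, :] = new_top
    frame[1::2, :] = new_bottom
    return frame

top = np.full((2, 3), 5.0)
bottom = np.full((2, 3), 7.0)
resized = resample_interlaced_frame(top, bottom, 8)
```

Each field is stretched from its own half-resolution lines, so the interleaved result avoids temporal aliasing but carries only half the frame's spatial detail, which is the limitation noted above.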
The following described systems and methods address these and other limitations of traditional systems and procedures to deinterlace video data and/or resample interlaced video data.