1. Technical Field
The present disclosure generally relates to methods and devices that convert interlaced video into a de-interlaced form. More particularly, but not exclusively, the present disclosure relates to electronic devices that employ techniques to detect a still motion condition and improve the quality of de-interlaced video.
2. Description of the Related Art
In a conventional interlaced video image, each full frame of displayable video information includes two fields or sub-frames. One field is designated “odd,” and the other field is designated “even.” Each field is composed of video data representing horizontal lines of displayable video information. The odd fields include the video information for odd horizontal lines of a display (i.e., 1, 3, 5, . . . ), and the even fields include the video information for even horizontal lines of the display (i.e., 2, 4, 6, . . . ). When the conventional video image is formed, the odd horizontal lines of video information are composed, one-by-one, and then the alternate even lines of video information are composed, one-by-one, into the spaces between the odd lines.
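By way of a non-limiting illustration, the field structure described above can be sketched in a few lines of Python. The function name and the representation of a frame as a list of rows are illustrative assumptions only; display lines are numbered from 1, so the odd field occupies 0-based row indices 0, 2, 4, . . . in such a representation.

```python
def split_into_fields(frame):
    """Split a full frame (a list of rows) into its two interlaced fields.

    The odd field carries display lines 1, 3, 5, ... and the even field
    carries display lines 2, 4, 6, ...
    """
    odd_field = frame[0::2]   # display lines 1, 3, 5, ...
    even_field = frame[1::2]  # display lines 2, 4, 6, ...
    return odd_field, even_field

# A toy 4-line frame with 2-pixel rows.
frame = [[10, 11], [20, 21], [30, 31], [40, 41]]
odd, even = split_into_fields(frame)
```

Transmitting the two fields in alternation, rather than whole frames, is what yields the bandwidth and flicker properties discussed below.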
In traditional display systems, such as in the cathode-ray tubes (CRTs) found in television sets and early computer monitors, the video information was rendered with an electron gun that scanned the inside face of the CRT on alternate lines beginning in an upper left corner and finishing in a lower right corner. First, odd lines would be rendered, and then even lines would be rendered. These conventional systems are known as interlaced scanning systems. Interlaced scanning systems can provide an effective doubling of the frame rate, which reduces the effects of flicker. Alternatively, interlaced scanning systems can also provide the effect of doubling vertical resolution of an image without an increased requirement for additional bandwidth.
In current display systems, such as those formed as plasma panels, light-emitting diode (LED) panels, and the like, video images are formed as a matrix with individually addressable pixels. In these systems, each individual frame of data is drawn, stored, or communicated in sequence. These systems are known as non-interlaced or progressive scanning systems, and each individual frame may be prepared in a frame buffer having defined rows and columns.
A frame buffer forms a complete video image, which can be rendered as a whole on the display. In sequentially displayed frames, some portions of the display are maintained intact while other portions of the display are updated with new information. For example, when displaying a motion picture of sequential video frames that show an outdoor scene, portions of a background will not change from one frame to the next, but an object moving in the scene may require foreground portions to change from one frame to the next.
Since conventional video sources that communicate interlaced video streams still abound, systems are needed to convert the interlaced video signals to de-interlaced video signals. In some systems, an odd field and an even field are combined into a single output frame for display. In other systems, each odd field and each even field is individually converted into a single output frame, for example by using interpolation techniques.
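The two conversion approaches just mentioned can be sketched as follows. These are minimal, non-limiting illustrations: the first combines an odd and an even field by interleaving their lines (often called “weave”), and the second builds a full frame from a single field by interpolating the missing lines from their vertical neighbors (often called “bob”). The function names and edge handling are assumptions for the sketch.

```python
def weave(odd_field, even_field):
    """Interleave an odd and an even field into one full output frame."""
    frame = [None] * (len(odd_field) + len(even_field))
    frame[0::2] = odd_field   # display lines 1, 3, 5, ...
    frame[1::2] = even_field  # display lines 2, 4, 6, ...
    return frame

def bob_odd(field):
    """Build a full frame from an odd field alone by interpolation.

    Each missing even line is estimated as the average of the field lines
    above and below it; the bottom edge simply repeats the last field line.
    """
    frame = []
    for i, row in enumerate(field):
        frame.append(row)
        nxt = field[i + 1] if i + 1 < len(field) else row
        frame.append([(a + b) // 2 for a, b in zip(row, nxt)])
    return frame
```

Weave preserves full vertical detail but produces “combing” artifacts on motion, while bob avoids combing at the cost of vertical resolution, which is why the motion-adaptive mixing described below is used.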
Both types of conversion, as well as others, cause problems in the output display. One reason for the problems is that each subsequent field of video data represents a portion of a recorded scene from a different time. This is very noticeable when an object in a scene is in motion or when a camera capturing a scene is in motion (e.g., “panning” across a scene). When capturing a video of a person riding a bicycle, for example, the person and the bike are moving in real time, and each subsequent captured image will place the person and the bike at a different location in the frame. As another example, when moving the camera to pan a scene that includes a non-moving object such as a fence, the vertical rails of the fence will appear at slightly different locations in successive frames of the captured video. These differences, caused by actual or perceived motion in the captured video images, are first processed by motion detection logic of the system that processes video for display. Subsequently, motion detection signals produced by the motion detection logic are processed by motion compensation algorithms, which are not discussed in the present disclosure.
FIG. 1 illustrates a conventional video system 10 that receives interlaced video data, processes the video data, and displays a de-interlaced video output. The video data may be received wirelessly or over a wired medium (e.g., satellite, cable, electronically read memory, and the like), and the video data may be delivered in nearly any acceptable format. A sequence of odd and even interlaced video fields 12 is communicated to a receiving device 14 (e.g., a set top box). The receiving device 14 includes a wired or wireless front-end 16 to receive the interlaced video 12. The wired or wireless front-end 16 processes the fields and passes some or all of the video data to a motion processing device 22. As subsequent frames of the interlaced video 12 are received and processed, first delay buffer logic 18 and second delay buffer logic 20 capture and store time-separated frames of data.
In one system, the first delay buffer logic 18 stores frames that are one time unit separated from the current video frames of data, and the second delay buffer logic 20 stores frames that are two time units separated from the current video frames of data. In such a system, the current video data may be recognized as time T, the first delay buffer logic 18 video data is recognized as time T−1, and the second delay buffer logic 20 video data is recognized as time T−2.
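A non-limiting software sketch of the delay arrangement described above follows; the class name and the use of a bounded deque are illustrative assumptions, not a description of the hardware buffer logic.

```python
from collections import deque

class DelayLine:
    """Hold the current frame plus the two most recent frames (T, T-1, T-2)."""

    def __init__(self):
        # Oldest-to-newest; the deque discards frames older than T-2.
        self._frames = deque(maxlen=3)

    def push(self, frame):
        """Accept the newest frame (time T); older taps shift to T-1, T-2."""
        self._frames.append(frame)

    def taps(self):
        """Return (T, T-1, T-2); entries are None until enough frames arrive."""
        newest_first = list(self._frames)[::-1]
        while len(newest_first) < 3:
            newest_first.append(None)
        return tuple(newest_first)
```

With this arrangement, the motion processing stage can compare the current data against one- and two-unit-delayed data, which is the basis of the motion detection described next.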
The motion processing device 22 includes motion detection logic, spatial interpolation logic, and temporal interpolation logic. The motion detection logic examines individual pixels of video data and determines (i.e., detects) whether a pixel in one frame represents motion of an object or scene when compared to a corresponding pixel in a nearby frame. Based on the recognition of some type of motion, the spatial interpolation logic will compensate for spatial motion by applying a particular weighting factor, and the temporal interpolation logic will compensate for time-based motion by applying a different particular weighting factor.
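The per-pixel comparison performed by such motion detection logic can be sketched, in a non-limiting way, as thresholded frame differencing. The threshold value, function name, and list-of-rows pixel representation are assumptions for the sketch; actual systems tune the threshold and compare same-polarity fields.

```python
MOTION_THRESHOLD = 12  # assumed 8-bit luma threshold; tuned per system

def detect_motion(current_field, delayed_field, threshold=MOTION_THRESHOLD):
    """Return a per-pixel motion map for two time-separated fields.

    Each entry is 1 where the luma difference between corresponding pixels
    exceeds the threshold (motion detected), else 0 (still).
    """
    return [
        [1 if abs(c - d) > threshold else 0 for c, d in zip(crow, drow)]
        for crow, drow in zip(current_field, delayed_field)
    ]
```

The resulting motion map is what drives the weighting between the spatial and temporal interpolation paths.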
The application of the weighting factors is performed by a mixing circuit 24 which may also be known as a “fader.” The mixing circuit receives information and video data from the motion processing device 22 and produces the de-interlaced output video signal 26.
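By way of a non-limiting sketch, the mixing or “fader” operation may be modeled as a per-pixel blend in which the motion value selects between the two interpolation estimates: full motion (1.0) passes the spatial estimate, no motion (0.0) passes the temporal estimate, and intermediate values fade between them. The function name and the normalized motion values are illustrative assumptions.

```python
def fade(spatial, temporal, motion):
    """Blend spatial and temporal interpolation results per pixel.

    "motion" holds values in [0.0, 1.0]: 1.0 selects the spatial estimate,
    0.0 selects the temporal estimate, and values between fade linearly.
    """
    return [
        [m * s + (1.0 - m) * t for s, t, m in zip(srow, trow, mrow)]
        for srow, trow, mrow in zip(spatial, temporal, motion)
    ]
```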
The de-interlaced output video signal 26 is stored in a frame buffer circuit 28. In the implementation of FIG. 1, the conventional video system 10 employs a conversion technique that combines data from odd and even fields into a single output frame. Alternative video systems may form an individual output frame from each individual odd field and from each individual even field. Representative data from the odd and even fields is shown within the frame buffer circuit 28 to aid understanding of the illustration.
The data from the frame buffer circuit 28 is passed as displayable output video 30 to a display device 32.
Various de-interlacing systems have been researched for many years, and some known conventional systems now employ complex processing algorithms.
One such system is described in U.S. Pat. No. 7,193,655, which teaches a process and device for de-interlacing by pixel analysis. The patent describes many implementations, including at least one method for de-interlacing a video signal wherein the output is provided either by a temporal interpolation, by a spatial interpolation, or by a mixture of both. In such a method, the decision for the interpolation is based on motion detection in the relevant area of a window.
Another known system is described in U.S. Pat. No. 8,774,278, which teaches recursive motion for a motion detection de-interlacer device. The patent describes a method for detecting motion in an interlaced video signal using a recursive motion result from a previous instance. A final motion value is used to generate blend factors corresponding to the missing pixels.
An image apparatus and an image processing method are described in U.S. Patent Publication No. 2006/0152620A1. The method teaches detecting an image portion that is intended to be a “still image” based on motion detection results. A history value is generated and used to determine a mixture ratio between an in-field interpolation and an inter-field interpolation.
Yet one more system is described in U.S. Patent Publication No. 2005/0078214A1, which teaches a method and de-interlacing apparatus that employs recursively generated motion history maps. In the patent, a de-interlacer includes recursive motion history map generating circuitry operative to determine a motion value associated with one or more pixels in interlaced fields based on pixel intensity information from at least two neighboring same polarity fields. The recursive motion history map generating circuitry generates a motion history map containing recursively generated motion history values for use in de-interlacing interlaced fields wherein the recursively generated motion history values are based, at least in part, on a decay function.
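The general idea of a recursively generated, decaying motion history value can be sketched as follows. This is a generic, non-limiting illustration of the recursive-decay concept only, not a reproduction of any patented circuitry; the function name and decay constant are assumptions.

```python
def update_history(history, motion, decay=0.75):
    """Recursively update a per-pixel motion history map.

    Fresh motion reloads the history value; where no new motion is seen,
    the previous history value decays toward zero, so recently moving
    pixels are still treated cautiously for several fields.
    """
    return [
        [max(m, h * decay) for h, m in zip(hrow, mrow)]
        for hrow, mrow in zip(history, motion)
    ]
```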
All of the subject matter discussed in the Background section is not necessarily prior art and should not be assumed to be prior art merely as a result of its discussion in the Background section. Along these lines, any recognition of problems in the prior art discussed in the Background section, or associated with such subject matter, should not be treated as prior art unless expressly stated to be prior art. Instead, the discussion of any subject matter in the Background section should be treated as part of the inventor's approach to the particular problem, which in and of itself may also be inventive.