The image quality of flat-panel televisions and other displays has improved in part due to higher refresh or frame rates. While film and video standards such as NTSC and PAL have fixed frame rates, up-converters in modern TV's and displays allow a higher frame rate to be displayed to the viewer. Higher frame rates leave less time between adjacent frames, allowing for smaller movement of displayed objects between frames. Moreover, the LCD display hold time of each frame is reduced at higher frame rates. As a result, the after-image, and thus the perceived motion blur, is reduced. Motion appears smoother and less jerky at higher frame rates. Newer technologies such as 3D TV's require higher frame rates so that slightly different images can be displayed to each eye using active shutter glasses. In 3D TV's with active shutter glasses, left-eye views and right-eye views are displayed alternately, so the actual frame rate received by each eye is halved. Frame-rate up-conversion is used to maintain the frame rate for each eye so as to keep motion as smooth as in 2D video.
While frames could simply be replicated to increase the frame rate, modern graphics processors may create new frames between existing frames. The new frames may be interpolated from the two surrounding original frames. Each frame can be divided into MacroBlocks (MB's), and Motion Vectors (MV's) can be generated for each MB. Each MB is then translated along its MV to construct the interpolated frame between the two original frames.
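The MV-generation step described above can be illustrated by a minimal block-matching sketch. This is not the method of any particular frame-rate converter; it assumes grayscale frames stored as 2-D arrays, an exhaustive search window, and a sum-of-absolute-differences (SAD) cost, and the function name `estimate_motion_vectors` is hypothetical:

```python
import numpy as np

def estimate_motion_vectors(prev, curr, mb=8, search=4):
    """Exhaustive block-matching motion estimation (illustrative sketch).

    For each mb x mb macroblock of `prev`, search a +/- `search` pixel
    window in `curr` for the best sum-of-absolute-differences (SAD)
    match, and record the displacement as that block's motion vector.
    """
    h, w = prev.shape
    mvs = np.zeros((h // mb, w // mb, 2), dtype=int)
    for by in range(h // mb):
        for bx in range(w // mb):
            y, x = by * mb, bx * mb
            block = prev[y:y+mb, x:x+mb].astype(int)
            best, best_sad = (0, 0), None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ty, tx = y + dy, x + dx
                    # skip candidate positions outside the frame
                    if ty < 0 or tx < 0 or ty + mb > h or tx + mb > w:
                        continue
                    cand = curr[ty:ty+mb, tx:tx+mb].astype(int)
                    sad = np.abs(block - cand).sum()
                    if best_sad is None or sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            mvs[by, bx] = best
    return mvs
```

Real converters use far faster search strategies (hierarchical, diamond, or predictive search), but the output is the same per-MB displacement field the text describes.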
FIG. 1 shows frame interpolation for Frame-Rate Conversion (FRC). Frames 1, 2, 3 are original frames in an original video sequence having a lower frame frequency. Interpolated frame 1.5 is created from frames 1 and 2 by translating MB's from either frame 1 or frame 2 along MV's. Likewise, interpolated frame 2.5 is created from original frames 2 and 3 by translating macroblocks along motion vectors. The translated distance to frame 2.5 may be half of the translation distance between frames 2 and 3 for each motion vector.
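The half-distance translation can be sketched as follows, under the same assumptions as above (grayscale arrays, one integer MV per macroblock); `interpolate_frame` is a hypothetical name, not a function from any real FRC product:

```python
import numpy as np

def interpolate_frame(prev, mvs, mb=8):
    """Build the in-between frame by translating each macroblock of
    `prev` halfway along its motion vector (illustrative sketch).

    `mvs[by, bx]` is the (dy, dx) displacement of block (by, bx)
    from `prev` to the next original frame.
    """
    h, w = prev.shape
    mid = np.zeros_like(prev)
    for by in range(mvs.shape[0]):
        for bx in range(mvs.shape[1]):
            y, x = by * mb, bx * mb
            dy, dx = mvs[by, bx] // 2          # half the full-frame motion
            ty = min(max(y + dy, 0), h - mb)   # clamp to frame bounds
            tx = min(max(x + dx, 0), w - mb)
            mid[ty:ty+mb, tx:tx+mb] = prev[y:y+mb, x:x+mb]
    return mid
```

Note that blocks which move away can leave holes, and overlapping blocks overwrite one another; handling those cases, and the covered/uncovered regions discussed below, is precisely where practical converters become complicated.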
The final sequence of frames produced by interpolation has double the number of frames, with one interpolated frame inserted after each original frame.
Some foreground objects may be moving faster than the background, such as the honeybee moving toward the flower in the sequence shown in FIG. 1. These foreground objects (objects) have larger motion vectors than background objects (background).
FIG. 2 highlights an object moving relative to a background, creating covered and uncovered regions. Object 10 in frame N−1 is in motion relative to the background, shown as a grid. Object 10 from frame N−1 moves to the location of object 10′ in frame N. Object 10 is translated along a motion vector to find the location of object 10 in interpolated frame N−0.5.
Object 10 moves lower and to the right to the location of object 10′, as seen when frames N−1 and N are stacked on top of each other as shown in the bottom of FIG. 2. The apparent motion of object 10 creates an uncovered region U and a covered region C. The uncovered region U is a portion of the background image that was hidden by object 10 in frame N−1 but becomes visible in frame N. Likewise, covered region C is a portion of the background image that was visible in frame N−1 but becomes hidden by object 10′ in frame N.
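The geometry of regions U and C can be expressed with simple set differences. The sketch below assumes the object occupies an axis-aligned rectangle in each frame, which is a simplification of the arbitrary object shapes in FIG. 2; the helper `occlusion_masks` is hypothetical:

```python
import numpy as np

def occlusion_masks(shape, obj_prev, obj_next):
    """Return (covered, uncovered) boolean masks for an object that
    moves between two frames (illustrative sketch).

    `obj_prev` and `obj_next` are (top, left, height, width) boxes for
    the object in frames N-1 and N.  Background covered in frame N is
    inside the new box but not the old; background uncovered in frame N
    was inside the old box but not the new.
    """
    def box_mask(top, left, height, width):
        m = np.zeros(shape, dtype=bool)
        m[top:top+height, left:left+width] = True
        return m

    old, new = box_mask(*obj_prev), box_mask(*obj_next)
    covered = new & ~old     # region C: newly hidden background
    uncovered = old & ~new   # region U: newly exposed background
    return covered, uncovered
```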
Such covering and uncovering, or occlusion and disocclusion, by object 10 complicates frame interpolation.
FIG. 3 highlights missing motion vectors for covered and uncovered regions. The frames are shown edge-on in FIGS. 3A-B. In FIG. 3A, forward motion vectors 14 point to the locations where macroblocks from frame FN−1 appear in next original frame FN. Object motion vectors 12 for object 10 point to the new location for object 10′ in frame FN. The location of object 10 in interpolated frame FN−0.5 can be determined by translation of macroblocks in object 10 by half of the distance of object motion vectors 12, just as macroblocks for the background image can be located at half the distance of motion vectors 14.
However, object 10 is moving relative to the background. The apparent motion of object 10 causes some of the macroblocks in frame FN−1 to have no valid motion vector 14. For example, macroblocks just above object 10 in frame FN−1 are covered by object 10′ in frame FN, so these macroblocks have no matching macroblocks in frame FN.
In FIG. 3B, backwards motion vectors 14 point to the locations where macroblocks from frame FN appear in prior original frame FN−1. Object motion vectors 12 for object 10′ point to the prior location for object 10 in frame FN−1. The location of object 10 in interpolated frame FN−0.5 can be determined by backward translation of macroblocks in object 10′ by half of the distance of object motion vectors 12, just as macroblocks for the background image can be located at half the distance of motion vectors 14.
Using backwards motion vectors also results in some of the macroblocks in frame FN having no valid motion vector 14. For example, macroblocks just below object 10′ in frame FN were uncovered by the apparent movement of object 10 between frames FN−1 and FN, so these macroblocks have no matching macroblocks in frame FN−1.
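One simple way to detect such blocks, sketched below under the same SAD-based assumptions as earlier (the function name and the per-pixel threshold are illustrative, not taken from any particular converter), is to flag any macroblock whose best match anywhere in the search window is still poor:

```python
import numpy as np

def find_unmatched_blocks(src, dst, mb=8, search=4, thresh=10):
    """Flag macroblocks of `src` that have no good match in `dst`
    (illustrative sketch).

    A block whose best sum-of-absolute-differences over the whole
    search window exceeds `thresh` per pixel is treated as having no
    valid motion vector.  Run with (F_{N-1}, F_N) to find blocks that
    become covered, and with (F_N, F_{N-1}) to find uncovered blocks.
    """
    h, w = src.shape
    flags = np.zeros((h // mb, w // mb), dtype=bool)
    for by in range(h // mb):
        for bx in range(w // mb):
            y, x = by * mb, bx * mb
            block = src[y:y+mb, x:x+mb].astype(int)
            best = None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ty, tx = y + dy, x + dx
                    if ty < 0 or tx < 0 or ty + mb > h or tx + mb > w:
                        continue
                    sad = np.abs(block - dst[ty:ty+mb, tx:tx+mb].astype(int)).sum()
                    best = sad if best is None else min(best, sad)
            flags[by, bx] = best is None or best > thresh * mb * mb
    return flags
```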
Occlusion and disocclusion may cause problems for frame-rate converters. Since there are no valid motion vectors for covered or uncovered regions, simple motion estimation and macroblock translation breaks down along the edges of moving objects. The edges of objects can appear jumpy rather than move smoothly. Visible artifacts may be created by the frame-rate converter when occlusion processing fails. For example, visible artifacts may take the form of a halo around the edges of a moving person's head. Thus these kinds of visible artifacts are sometimes known as halo effects, although they can occur along the edges of any moving object.
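One common family of mitigations, shown here only as a generic sketch and not as the method of any specific converter, builds two candidate interpolated frames, one from forward motion vectors and one from backward motion vectors, then assembles the output block-by-block from whichever candidate matched better, so that a block with no valid vector in one direction falls back to the other direction's prediction. All names and array shapes below are illustrative:

```python
import numpy as np

def select_per_block(fwd_mid, bwd_mid, fwd_err, bwd_err, mb=8):
    """Assemble the output frame block-by-block, taking each macroblock
    from whichever candidate (forward or backward interpolation) had
    the smaller per-block matching error (illustrative sketch)."""
    out = np.empty_like(fwd_mid)
    for by in range(fwd_err.shape[0]):
        for bx in range(fwd_err.shape[1]):
            y, x = by * mb, bx * mb
            src = fwd_mid if fwd_err[by, bx] <= bwd_err[by, bx] else bwd_mid
            out[y:y+mb, x:x+mb] = src[y:y+mb, x:x+mb]
    return out
```

As the following paragraphs note, selection schemes of this kind still fail on complex or featureless backgrounds, which is the motivation for the improved converter described later.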
Various methods have been used to reduce such halo effects. Sometimes these methods are effective for some objects, but some combinations of objects and backgrounds can cause these methods to fail. For example, when the background is itself complex and changing, the methods may make incorrect assignments, causing some macroblocks from background objects to be placed over foreground objects. Ragged edges or blocky artifacts may also result from incorrect identification of covered and uncovered regions.
Some methods fail when the background is featureless. Detection may fail on object boundaries, resulting in ragged edges. The motion vectors may be inaccurate near covered regions, or may be incorrectly assigned, resulting in further visible artifacts. Computational load may be excessive for these methods of halo reduction.
What is desired is a frame-rate converter that generates interpolated frames with fewer visible artifacts. It is desired to generate macroblocks for interpolated frames in covered and uncovered regions even when motion vectors in both directions are not valid. Reduction of halo effects along the edges of moving objects is desirable.