Moving video sources from lower frame rates to higher frame rates requires the generation of new frames of video data between the current, already existing frames. The new frames typically result from an interpolation process in which the pixels of the new frames are computed from a current frame, CF, and a previous frame, P1. More than one frame may be interpolated between the two frames. The interpolation process is a scaling operation in the temporal domain; therefore, the locations of the interpolated frames are referred to as phases rather than frames.
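As an illustration of how temporal scaling maps output frames to phases, the following sketch computes the phases that fall within one input-frame interval for a given conversion. The function name and the use of exact rationals are illustrative assumptions, not taken from the source.

```python
from fractions import Fraction

def interpolation_phases(rate_in, rate_out):
    """List the output-frame times that fall inside one input-frame
    interval, expressed as phases in [0, 1): phase 0 is the previous
    frame P1 and phase 1 is the current frame CF (illustrative sketch)."""
    step = Fraction(rate_in, rate_out)  # input intervals per output frame
    phases = []
    t = Fraction(0)
    while t < 1:
        phases.append(t)
        t += step
    return phases

# Example: 24 fps -> 60 fps yields phases 0, 2/5, and 4/5 per interval,
# so two new frames must be interpolated between P1 and CF.
print(interpolation_phases(24, 60))
```

A phase of 0 or 1 needs no interpolation; any fractional phase requires pixels to be computed from both P1 and CF.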
The interpolation process must account for movement of objects in the video data between frames. An object in motion will have pixels that depict that object in different locations in the current frame than in the previous frame. Motion estimation and motion compensation techniques estimate that motion and use it to predict the positions of those pixels at the interpolation phase.
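A minimal sketch of motion-compensated interpolation for one row of pixels follows. It assumes a single, known integer motion vector from P1 to CF and a linear blend; real systems use dense, per-pixel motion fields. The function name and edge clamping are illustrative assumptions, not from the source.

```python
def mc_interpolate(p1, cf, mv, alpha):
    """Motion-compensated interpolation of one pixel row (sketch).
    p1, cf: previous and current frames (lists of pixel values).
    mv: global motion vector in pixels per frame interval (P1 -> CF).
    alpha: interpolation phase in (0, 1); 0 is P1, 1 is CF.
    Each output pixel is fetched from P1 along the backward part of the
    motion trajectory and from CF along the forward part, then blended."""
    n = len(p1)
    out = []
    for x in range(n):
        # A pixel at x at phase alpha came from x - alpha*mv in P1
        # and will be at x + (1 - alpha)*mv in CF; clamp at the edges.
        xb = min(max(x - round(alpha * mv), 0), n - 1)
        xf = min(max(x + round((1 - alpha) * mv), 0), n - 1)
        out.append((1 - alpha) * p1[xb] + alpha * cf[xf])
    return out

# An object (value 10) moves from index 1 in P1 to index 3 in CF;
# at phase 0.5 it is reconstructed at index 2.
row = mc_interpolate([0, 10, 0, 0], [0, 0, 0, 10], mv=2, alpha=0.5)
print(row)
```

Note that pixels near the frame edges, where the trajectory leaves the frame, already hint at the cover/uncover problem discussed below: no single trajectory explains those pixels, so a simple blend produces artifacts there.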
Using true motion information improves the resulting image quality because the motion information used is the actual motion information rather than estimated motion information. Even with true motion, problems may still arise. For example, even state-of-the-art automatic motion vector calculation cannot generate the true motion fields at the interpolation phase perfectly, which results in annoying artifacts at the interpolation phase. As another example, current frame interpolation methods have difficulty with object cover/uncover analysis, in which objects become occluded or revealed between frames. Other methods are needed that generate high-quality frame interpolation results.