Digital computers are used today to perform a wide variety of tasks. A primary means of interfacing a computer system with its user is through its graphics display. The graphical depiction of data, through full motion video, detailed true color images, photorealistic 3D modeling, and the like, has become a preferred mechanism for human interaction with computer systems. The graphical depiction of data is often the most efficient way of presenting complex information to the user, and high-performance interactive 3D rendering has become a compelling entertainment application for computer systems.
Computer systems are increasingly being used to handle video streams and video information in addition to high-performance 3D rendering. Typical video processing applications utilize computer systems that have been specifically configured for handling video information. Such computer systems usually include dedicated video processing hardware for processing the constituent frame data of a video stream. Such video processing hardware includes, for example, video processing amplifiers (procamps), overlay engines (e.g., for compositing video or images), specialized DACs (digital to analog converters), and the like.
Problems exist with the implementation of video processing hardware that is configured to handle multiple video streams. The video technology deployed in many consumer electronics and professional-level devices relies upon one or more video processors to mix multiple video streams and/or format and/or enhance the resulting video signals for display. For example, when performing video mixing or keying, it is important to align an input video stream to an output video stream before performing the mixing. Even when the systems are synchronized (e.g., genlocked), the output video stream can be offset from the input video stream by several pixels.
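The offset problem described above can be sketched in a few lines. This is a hypothetical illustration only: the fixed `OFFSET`, the scanline values, and the alpha-blend model of mixing are assumptions made for the sketch, not details from the source.

```python
# Hypothetical illustration: two genlocked scanlines whose pixels are shifted
# by a few positions, blended with and without offset compensation.
OFFSET = 3  # assumed small, fixed pixel offset despite genlock

def mix(a, b, alpha=0.5):
    """Mixing/keying modeled as a per-pixel alpha blend."""
    return [alpha * x + (1.0 - alpha) * y for x, y in zip(a, b)]

def realign(line, offset):
    """Undo a known left shift of `offset` pixels (wrap-around for brevity)."""
    return line[-offset:] + line[:-offset]

scanline = list(range(16))                        # input-stream scanline
shifted = scanline[OFFSET:] + scanline[:OFFSET]   # offset output-stream copy

naive = mix(scanline, shifted)                       # blends pixel i with pixel i+3
corrected = mix(scanline, realign(shifted, OFFSET))  # blends matching pixels
```

Mixing the raw streams blends each pixel with its neighbor three positions away, which is exactly the artifact the alignment step must remove before keying.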
The undesirable offset causes problems with the mixing process. One prior art solution to the offset problem is to buffer the entire frame in external memory, and then perform the mixing of video frame data on the next frame. This solution is problematic because it requires a large amount of external memory to buffer an entire video frame. It is also difficult to implement in real time because large amounts of video data must be accessed in memory and processed at 30 frames per second or more. Consequently, such solutions are inordinately expensive to implement for high resolution video (e.g., HDTV, etc.). What is required, therefore, is a solution that can implement high-quality video stream synchronization while eliminating undesirable offset effects.
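The cost of the full-frame buffering approach can be made concrete with back-of-the-envelope arithmetic. The 1080p frame dimensions, 24-bit color depth, and 30 fps rate below are standard figures, not values taken from the source; the doubling of traffic reflects the assumption that each buffered frame is written once and read back once.

```python
# Back-of-the-envelope cost of buffering an entire HDTV frame externally.
WIDTH, HEIGHT = 1920, 1080   # 1080p frame dimensions
BYTES_PER_PIXEL = 3          # 24-bit color
FPS = 30                     # frame rate cited in the text

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL   # memory per buffered frame
stream_rate = frame_bytes * FPS                  # bytes arriving per second

# Buffering means each frame is written to memory and read back for mixing,
# roughly doubling the sustained memory traffic.
memory_traffic = 2 * stream_rate

print(frame_bytes)      # 6220800 bytes (~5.9 MiB per frame)
print(memory_traffic)   # 373248000 bytes/s (~356 MiB/s sustained)
```

Even before adding the bandwidth of the second stream being mixed, the buffer must sustain hundreds of megabytes per second of traffic, which illustrates why the approach is costly at HDTV resolutions.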