In general, latency in delivering video content is understood as the time between sending the video content from a video input source and displaying it on a video output display. Various methods, and in particular device-based solutions, for reducing video latency have been presented in the past. However, video latency remains a problem in all kinds of video content delivery and broadcasting when computer video processing systems are used.
In past solutions, drawing of the video content takes place in a back buffer. These solutions wait until the input side has completely received one input frame, then move the received input frame from the input side to the CPU (central processing unit, processor) and/or main memory via DMA transfer and memcopy functions. After that, the frame is moved from the CPU to the GPU (graphics processing unit), which draws a scene into the back buffer. The solutions then swap the front buffer and the back buffer so that the GPU starts outputting the new front buffer after the next vertical blanking interval. The front buffer contains the output frame that is currently being sent to the video output display. The front frame buffer is sent to the output starting from the topmost part of the frame buffer and continuing to the bottom of the frame.

While the front buffer supplies rendered images to the display, the back buffer stores images that are in the process of being rendered by the video graphics circuitry. Once the video graphics circuitry has completed rendering the current images and the fully rendered images in the front buffer have been provided to the display, the front and back buffers are flipped. As such, the previous front buffer becomes the back buffer and is used to store new images as they are rendered, while the previous back buffer becomes the front buffer and provides the rendered images it stores to the display driver. The front and back buffers continually flip in this manner, and each flip occurs during the blanking interval of the video data so that tearing (i.e., a visible separation of images) does not occur. Typically, the buffers flip at the refresh rate of the display (e.g., 50 Hz, 60 Hz, 75 Hz, or 90 Hz), in synchronization with the video graphics circuitry rendering a new frame of data (i.e., images). These solutions are robust and simple, but they incur additional latency, and because drawing takes additional time, the total latency varies from two to three frames.
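The staged pipeline described above can be sketched as follows. This is a minimal illustrative model only, not any particular implementation: the names (`DoubleBuffer`, `conventional_latency`) are hypothetical, and timing is counted in whole frame periods under the simplifying assumption that every stage hands off at a frame boundary (vertical blanking interval).

```python
import math


class DoubleBuffer:
    """Front/back buffer pair flipped at the vertical blanking interval."""

    def __init__(self):
        self.front = None   # frame currently being scanned out, top to bottom
        self.back = None    # frame being rendered by the graphics circuitry

    def draw(self, frame):
        # Rendering always targets the back buffer, never the front buffer.
        self.back = frame

    def flip(self):
        # Swap is performed during the blanking interval so that the
        # scanned-out image never mixes two frames (no tearing).
        self.front, self.back = self.back, self.front


def conventional_latency(draw_time: float) -> int:
    """End-to-end latency, in whole frame periods, for one input frame.

    draw_time is the rendering time measured in frame periods.  Stages:
      1) wait until the input side has received the complete frame
         (one period),
      2) DMA/memcopy to main memory, upload to the GPU, and draw into
         the back buffer -- rounded up to whole periods, because the
         buffer swap only takes effect at the next vertical blanking
         interval,
      3) scan-out of the new front buffer then begins.
    """
    receive = 1                             # full input frame must arrive
    draw = max(math.ceil(draw_time), 1)     # drawing consumes >= 1 period
    return receive + draw
```

Under this model, drawing that finishes within one refresh interval gives a two-frame latency, while drawing that spills into the next interval gives three, matching the two-to-three-frame range stated above.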
In the past, solutions for reducing video latency have been strongly device and hardware based, making these solutions expensive and requiring significant investments in infrastructure. Results comparable to those of the present invention have previously been achieved only with hardware-based solutions. An effective solution for reducing video latency in computer video processing systems is therefore needed.