Conventionally, image frames are rendered to allow display thereof by a display device. For example, a 3-dimensional (3D) virtual world of a video game may be rendered by a graphics processing unit (GPU) to produce image frames having a corresponding 2-dimensional (2D) perspective. In any case, the time to render each image frame (and hence the rendering rate) is variable, depending on the computational complexity of the frame. For example, the rendering rate may depend on the number of objects in the scene shown by the image frame, the number of light sources, the camera viewpoint/direction, etc.
Unfortunately, the refresh rate of a display device has generally been independent of the rendering rate. For example, video is currently designed to play back at fixed rates of 24 Hz, 60 Hz, etc. That is, video is displayed at a fixed rate regardless of the rendering rate, which is variable. As a result, only limited schemes have been introduced that attempt to compensate for any discrepancies between the differing rendering and display refresh rates.
By way of example, a vertical synchronization-on (vsync-on) mode and a vertical synchronization-off (vsync-off) mode are techniques that have been introduced to compensate for any discrepancies between the differing rendering and display refresh rates. In practice, these modes have been used exclusively for a particular application, as well as in combination, where the particular mode can be selected dynamically based on whether the GPU render rate is above or below the refresh rate of the display device.
However, vsync-on and vsync-off have exhibited various limitations. For instance, when a display device is operating in vsync-on mode, an already rendered image frame must wait until the end of a refresh cycle before that image frame is put up for display. More particularly, when the GPU render rate of an image frame is slower than the display device refresh rate (e.g., 60 Hz), the effective refresh rate is halved, because an image may be shown twice over two refresh cycles. Also, when the GPU render rate is faster than the display device refresh rate, latency is still introduced, as the finished image frame must still wait until the end of the refresh cycle before being shown. As such, rendered video is not immediately put up for display when operating in vsync-on mode.
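The vsync-on timing described above can be illustrated with a minimal sketch. The function name `vsync_on_present_times` and the timing model (double buffering, where the GPU waits for the buffer swap at the vsync boundary before rendering the next frame) are assumptions for illustration, not a description of any particular implementation:

```python
import math

REFRESH_HZ = 60.0
REFRESH_INTERVAL = 1.0 / REFRESH_HZ  # one refresh cycle, ~16.7 ms

def vsync_on_present_times(render_times):
    """Return the vsync boundary (in seconds) at which each frame is shown.

    With vsync-on, a finished frame waits until the next refresh cycle
    boundary before it is scanned out to the display.
    """
    presents = []
    t = 0.0  # time the GPU starts rendering the current frame
    for rt in render_times:
        done = t + rt
        # the finished frame waits for the next vsync boundary
        present = math.ceil(done / REFRESH_INTERVAL) * REFRESH_INTERVAL
        presents.append(present)
        # assume the GPU cannot start the next frame until the buffer
        # swap at the vsync boundary (simple double buffering)
        t = present
    return presents

# A 20 ms render time is slower than the 16.7 ms refresh interval, so
# each frame occupies two refresh cycles: the effective rate is halved
# to 30 Hz even though the GPU only missed the deadline by ~3 ms.
slow = vsync_on_present_times([0.020] * 5)
```

Running the fast case (`vsync_on_present_times([0.010] * 3)`) shows the second limitation: each frame finishes well before its vsync boundary yet is still held back until the boundary, so latency is introduced even when the GPU outpaces the display.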
In the other case, when a display device is operating in vsync-off mode, the GPU starts sending the pixels of an image frame to the display device as soon as rendering is complete, and abandons sending pixels from an earlier image frame. In this case, the GPU need not wait before rendering the next image frame, as the buffer is immediately flushed. As a result, in vsync-off mode there is less latency and faster rendering. However, because the GPU immediately begins to send pixel information for an image frame that has completed rendering, the display device may show a “tear line” where the newly rendered frame is written to the display in the middle of a refresh cycle. That is, pixels from a previous image frame are shown on one side of the tear line, while pixels from the new image frame are shown on the other side. The tear line is especially noticeable when an object in the scene is moving across multiple image frames. As a result, part of the object appears below the tear line and part above it; the two parts are displaced from each other, and the object appears torn.
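The tearing behavior can likewise be sketched with a toy model of scanout. The function `scanout_with_swap`, the 8-scanline display, and the single-character "pixels" are hypothetical simplifications chosen only to show how a mid-refresh buffer swap splits the displayed image into two frames:

```python
HEIGHT = 8  # scanlines in this toy display

def scanout_with_swap(old_frame, new_frame, swap_row):
    """Scan out one refresh cycle; the framebuffer is swapped at swap_row.

    Rows scanned before the swap come from the old frame; the remaining
    rows come from the new frame, producing a visible tear line.
    """
    shown = []
    for row in range(HEIGHT):
        src = old_frame if row < swap_row else new_frame
        shown.append(src[row])
    return shown

old = ["A"] * HEIGHT  # frame with an object at one position
new = ["B"] * HEIGHT  # next frame, with the object moved
torn = scanout_with_swap(old, new, swap_row=3)
# rows 0-2 show the old frame and rows 3-7 show the new frame,
# so the displayed image has a tear line at row 3
```

If the object moved horizontally between `old` and `new`, the part of the object above row 3 and the part below it would be displaced from each other, which is the torn appearance described above.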
There is a need for addressing these and/or other issues in the prior art.