Rendering computer-generated scenes for display has grown in importance in areas such as gaming, modeling, and movies. Rendering is a computationally expensive process in which a scene's spatial, textural, and lighting information is combined to determine the color value of each pixel in the rendered image. The graphics processing devices that perform the rendering, however, have limited processing power and memory capacity. These limits make rendering a scene, especially in real time, a challenging task.
To speed up the rendering process, foveated rendering is sometimes employed. Foveated rendering uses an eye tracker to reduce the rendering workload, based on the observation that human vision focuses on the portion of the screen near the gaze point, while visual acuity drops dramatically in the peripheral vision (i.e., the area outside the region covered by the fovea). In foveated rendering, the content in an area near the gaze point of a user, also referred to herein as a “foveated region,” is rendered with high quality, whereas the content outside the foveated region, referred to as the “non-foveated region,” is rendered with lower quality. As the user's gaze point moves, the images are re-rendered accordingly to match the new location of the gaze point.
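The split between the foveated and non-foveated regions described above can be sketched as follows. This is a minimal illustration, not an actual implementation: the two-level quality scheme, the fixed foveated radius, and the function and parameter names are all assumptions chosen for clarity.

```python
import math

def pixel_quality(px, py, gaze_x, gaze_y, fovea_radius=100.0):
    """Return an illustrative rendering-quality level for a pixel.

    Pixels within fovea_radius of the estimated gaze point fall in
    the foveated region and get full quality (1.0). Pixels outside
    it fall in the non-foveated region and get reduced quality
    (0.25 here, e.g. one shading sample per 2x2 pixel block).
    Real systems often use several concentric quality zones rather
    than this simple two-level scheme.
    """
    distance = math.hypot(px - gaze_x, py - gaze_y)
    return 1.0 if distance <= fovea_radius else 0.25

# With the gaze at the center of a 1920x1080 image, a pixel at the
# center is in the foveated region; a corner pixel is not.
center_quality = pixel_quality(960, 540, 960, 540)   # 1.0
corner_quality = pixel_quality(0, 0, 960, 540)       # 0.25
```

When the eye tracker reports a new gaze point, the quality map produced this way changes, and the scene is re-rendered with the high-quality zone re-centered on the new gaze location.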
However, the high-quality area of the rendered image displayed to a user does not always match the user's actual foveated region. Reasons for this mismatch include the latency introduced by the rendering process and the latency and inaccuracy of the gaze-point estimation, especially when a saccade or a blink has occurred. As a result, the image rendered and displayed to a user may have been generated based on a gaze point estimated tens of milliseconds earlier. Consequently, the content that is projected onto the user's foveated region at the time of display may have been rendered in low quality, causing an unpleasant experience for the user.