When rendering digital images in animations, it is often assumed that the human visual system is perfect, despite limitations arising from a variety of complexities and phenomena. That is, current methods of real-time rendering of a digital animation may operate on an assumption that a single rendered frame image will be fully visually appreciated at any single point in time. In reality, peripheral vision may be significantly worse than foveal vision in many ways, and these differences may not be explained solely by a loss of acuity. Nevertheless, reduced acuity still accounts for a significant portion of peripheral detail loss and is a phenomenon that can be exploited.
One method of exploiting this, termed "foveated rendering" or "foveated imaging," renders a particular region of each frame image at high resolution. A user's gaze may be tracked so that the high-resolution region is positioned on the image to correspond with the user's foveal region, while the area surrounding that region is rendered at relatively lower resolution. However, users may notice visual anomalies in such renders, particularly when prompted to look for them. Other techniques have implemented foveated rendering with spatial and temporal property variation; with such techniques, at a certain level-of-detail (LOD), users may perceive the foveated renders to be of equal or higher quality than their non-foveated counterparts.
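The compositing step described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not any particular product's implementation): given a full-resolution render, a peripheral render already upsampled to the same dimensions, and a tracked gaze position, it selects full-resolution pixels inside a circular foveal region and lower-resolution pixels everywhere else. The function name, radius parameter, and circular region shape are assumptions for illustration only.

```python
import numpy as np

def foveated_composite(render_full, render_low, gaze_xy, fovea_radius):
    """Composite a foveated frame from two renders of the same scene.

    render_full  : (H, W, 3) array, full-resolution render
    render_low   : (H, W, 3) array, lower-resolution render upsampled to (H, W)
    gaze_xy      : (x, y) tracked gaze position in pixel coordinates
    fovea_radius : radius, in pixels, of the high-resolution foveal region
    """
    h, w = render_full.shape[:2]
    # Distance of every pixel from the gaze point.
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    # Boolean mask: True inside the foveal region, False in the periphery.
    mask = (dist <= fovea_radius)[..., None]
    # Select full-resolution pixels in the fovea, low-resolution elsewhere.
    return np.where(mask, render_full, render_low)

# Example: white full-resolution render, black peripheral render.
full = np.ones((64, 64, 3))
low = np.zeros((64, 64, 3))
frame = foveated_composite(full, low, gaze_xy=(32, 32), fovea_radius=10)
```

In practice the peripheral region would be rendered at reduced resolution from the outset (saving shading work) and the hard circular boundary would typically be replaced with a blended falloff to reduce visible seams; both refinements are omitted here for brevity.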
With the increasing use of 4K-8K UHD displays and the push towards higher pixel densities for head-mounted displays, the industry is under pressure to meet market demands for computationally intensive real-time rendering.