Three-dimensional (3D) computer games and benchmarks typically spend a majority of the frame time computing the appearance (shading) of each pixel, where shading is determined from material properties and light sources. These lighting computations are often expensive, frequently requiring hundreds or even thousands of shader instructions per pixel. With high-resolution displays, next-generation virtual reality (VR) headsets, and similar devices offering both high resolution and high refresh rates, the shading cost becomes prohibitive for low- and medium-powered graphics devices. Further, as eye-tracking hardware becomes more widespread, such as in next-generation VR and augmented reality (AR) computing devices, the shading rate remains significantly high even in the periphery, where the user is not looking. To reduce the shading cost and make rendering on such devices feasible, it is desirable to exploit certain characteristics of the rendered image to avoid or reduce expensive computations; for example, large parts of a rendered image are often smooth or of low contrast.
Deferred shading (also known as deferred lighting) is a prevalent rendering technique in today's applications. With this technique, applications often uniformly lower the rendering/shading resolution and then up-scale the resulting images before display. This results in a substantial reduction in image quality, as image features, including sharp edges and high-frequency details, are under-sampled.
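The under-sampling problem described above can be illustrated numerically. The following sketch (not taken from any particular renderer; the high-frequency "shader" function is an assumption chosen for illustration) shades a synthetic pattern once at full resolution and once at quarter resolution with a nearest-neighbour up-scale, then measures the resulting per-pixel error:

```python
import numpy as np

# Hypothetical per-pixel "shader": a high-frequency pattern over
# normalized screen coordinates (an illustrative stand-in for
# expensive material/lighting evaluation).
def shade(u, v):
    return 0.5 * (1.0 + np.sin(64 * np.pi * u) * np.sin(64 * np.pi * v))

def render(h, w):
    # Evaluate the shader once per pixel on an h-by-w grid.
    v, u = np.meshgrid(np.linspace(0, 1, h, endpoint=False),
                       np.linspace(0, 1, w, endpoint=False),
                       indexing="ij")
    return shade(u, v)

full = render(256, 256)               # shade every display pixel
low = render(64, 64)                  # uniformly lowered shading resolution
up = np.kron(low, np.ones((4, 4)))    # nearest-neighbour up-scale to display size

# High-frequency detail present in `full` is lost in `up`.
err = np.abs(up - full).mean()
print(f"mean shading error after upscale: {err:.3f}")
```

Because the pattern's frequency exceeds what the quarter-resolution grid can represent, the up-scaled image diverges substantially from the fully shaded one, mirroring the loss of sharp edges and fine detail noted above.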