Display orientation changes and touch-related swipe, pinch, and stretch events are common in mobile computing devices. These frequent, asynchronous events often produce implicitly clipped two-dimensional (2D) images, leaving some pixel regions partially obscured. In conventional techniques that employ the graphics processing unit (GPU) runtime/driver for computing, threads are consistently dispatched to execute kernels on all pixels regardless of their visibility; display-extent clipping is deferred and performed in the subsequent render stage. Such conventional techniques are sub-optimal and inefficient in terms of power and computational resources in a dynamic model where the geometrical relation of an image to the display extent is constantly changing. Further, the aforementioned sensory events force repeated computation on the GPU and carry a significant overhead of attending to invisible pixels.
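As an illustrative sketch only (not drawn from any particular GPU driver), the contrast between the conventional full-extent dispatch and a clip-aware dispatch can be mocked up on the CPU with NumPy. The kernel, image dimensions, and visible rectangle below are all hypothetical; the point is that the clip-aware path touches only the visible sub-rectangle:

```python
import numpy as np

def run_kernel_full(image):
    """Conventional path: the per-pixel kernel runs on every pixel,
    even those later discarded by display-extent clipping."""
    return image * 2  # placeholder per-pixel kernel

def run_kernel_clipped(image, visible_rect):
    """Clip-aware path: work is dispatched only for the visible
    sub-rectangle; obscured pixels are left untouched."""
    x0, y0, x1, y1 = visible_rect
    out = image.copy()
    out[y0:y1, x0:x1] = image[y0:y1, x0:x1] * 2
    return out

image = np.ones((1080, 1920), dtype=np.float32)
visible = (0, 0, 960, 540)  # hypothetical: only a quarter is visible

full = run_kernel_full(image)
clipped = run_kernel_clipped(image, visible)

# Both paths agree on the visible region, but the clipped path
# processed 960*540 pixels instead of 1920*1080.
assert np.array_equal(full[0:540, 0:960], clipped[0:540, 0:960])
```

Under this toy model, deferring clipping to the render stage means the full-extent path pays the kernel cost for the obscured three quarters of the image on every orientation or gesture event.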