Computer-generated images are often created by examining a geometric model of a view space and modeled objects in the view space. The geometric model of the objects can have arbitrary resolution, but typically each object is represented by a finite number of polygons, such as triangles, positioned in the view space and having a color, color pattern, or texture over their surface and/or an alpha value or values representing transparency of the polygon. An image is typically output (i.e., stored, displayed, transmitted, or otherwise processed) as a pixel array.
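The scene representation described above can be illustrated with a minimal sketch. The names and fields below (e.g., `Triangle`, `alpha`) are illustrative only and are not drawn from any particular renderer: each object is a finite list of triangles, each carrying a surface color and an alpha value, and the output is a pixel array.

```python
# Illustrative sketch, not a definitive implementation: a scene as a
# finite set of triangles, each with a color and an alpha (transparency)
# value, rendered into a pixel array.
from dataclasses import dataclass

Vec3 = tuple[float, float, float]

@dataclass
class Triangle:
    v0: Vec3
    v1: Vec3
    v2: Vec3
    color: tuple[float, float, float]  # RGB, each component in [0, 1]
    alpha: float                       # 1.0 = opaque, 0.0 = fully transparent

# A "scene" is simply a finite list of such polygons positioned in view space.
scene = [
    Triangle((0, 0, 0), (1, 0, 0), (0, 1, 0), color=(0.8, 0.1, 0.1), alpha=1.0),
    Triangle((0, 0, 1), (1, 0, 1), (0, 1, 1), color=(0.2, 0.4, 0.9), alpha=0.3),
]

# The rendered image is output as a pixel array, e.g. a height x width
# grid of RGB values (initialized here to black).
width, height = 4, 3
pixels = [[(0.0, 0.0, 0.0) for _ in range(width)] for _ in range(height)]
```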
Some scenes may include objects that are transparent or partially transparent. Rendering transparent (and partially transparent) objects has proven difficult in real time, particularly with rasterized rendering. In some conventional approaches to real-time rendering, all shadows in a scene are rendered as opaque shadows, meaning that each object produces a fully dark shadow regardless of the transparency of the object. Using opaque shadows, however, produces visually incorrect results for transparent objects.
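The conventional opaque-shadow behavior can be sketched as follows. The function below is hypothetical and simplified: it models only the shadowing decision, in which any occluder between a point and the light blocks all light, without consulting the occluder's alpha value.

```python
# Hedged sketch of conventional opaque shadowing: a point is treated as
# fully shadowed if ANY occluder lies between it and the light, regardless
# of that occluder's transparency. Names are illustrative.

def opaque_shadow_factor(occluder_alphas):
    """Return the fraction of light reaching a point (1.0 = fully lit).

    Conventional opaque shadowing ignores the alpha values entirely:
    any occluder, however transparent, blocks all light.
    """
    return 0.0 if occluder_alphas else 1.0

# A mostly transparent glass pane (alpha 0.2) still casts a fully dark
# shadow under this model -- the visually incorrect result noted above.
print(opaque_shadow_factor([0.2]))  # 0.0 (fully dark shadow)
print(opaque_shadow_factor([]))     # 1.0 (unshadowed)
```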
In some instances, fully ray tracing a scene (i.e., tracing rays from the light source(s) into the scene) can solve the problems with rendering shadows for transparent objects discussed above. However, in some applications, such as video games, computational speed is a priority. In video games, frames are rendered very quickly, i.e., in real time or near real time, as a user interacts with the game. As such, conventional ray tracing techniques, which can take hours or days to render a single frame, are typically not suitable for rendering shadows for transparent objects in video games.
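Why ray tracing resolves the transparency problem can be shown with a second hypothetical sketch. Along a shadow ray, each partially transparent occluder attenuates the light by its transmittance (one minus its alpha), so a glass pane darkens its shadow only partially; the cost is that every shadow ray must be intersected against the scene geometry. The function and parameters below are illustrative assumptions, not a definitive implementation.

```python
# Hedged sketch of a traced shadow ray: light reaching a point is the
# product of the transmittances (1 - alpha) of all occluders along the
# ray, so partially transparent occluders cast partial shadows.

def traced_shadow_factor(occluder_alphas):
    """Return the fraction of light surviving all occluders on a shadow ray."""
    transmittance = 1.0
    for alpha in occluder_alphas:
        transmittance *= (1.0 - alpha)
    return transmittance

print(traced_shadow_factor([0.2]))       # glass pane: a pale, not black, shadow
print(traced_shadow_factor([1.0]))       # opaque occluder: full shadow
print(traced_shadow_factor([0.5, 0.5]))  # stacked panes attenuate multiplicatively
```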
As such, there remains a need in the art for a system and method for rendering shadows for transparent or translucent objects that overcome the drawbacks and limitations of existing approaches.