U.S. patent application Ser. No. 09/264,347, filed Mar. 8, 1999 (entitled "Parallel Pipelined Merge Engines") describes apparatus and methods that could be used to implement some of the methods described herein. That application (hereinafter "Heirich") is incorporated herein by reference for all purposes.
The present invention relates to image generation in general and in particular to methods and apparatus for quickly generating photo-realistic images.
Rendering is the process of computing a two-dimensional, viewpoint-dependent image of a three-dimensional geometric model. The geometric model specifies the position, shape and color of objects and light sources in a world space. The geometric model might also specify textures for objects, where a texture is a surface coloring. Textures are used, for example, to apply wood grain to a rectangular prism or to map a video image onto the surface of a sphere.
One approach to rendering is a two-stage approach, comprising a geometry stage and a rendering stage. In the geometry stage, a database of geometric descriptions of objects and light sources (the geometric model) is transformed from a world coordinate system ("object space") into a view-dependent coordinate system ("screen space"), taking into account a view point and a view surface. The view point is the point in the object space from which the objects would be viewed (if the objects and the object space actually existed) to obtain the image. The view surface is the surface in the world space that corresponds to the screen space.
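The geometry-stage mapping described above can be sketched in code. The following is a minimal illustration, not an implementation from the application: it assumes a camera at the view point looking down the negative z axis, a view surface at a hypothetical distance `focal_len`, and illustrative function and parameter names throughout.

```python
def world_to_screen(point, view_point, focal_len, width, height):
    """Project a world-space point to discrete pixel coordinates.

    A minimal sketch of the geometry stage: the camera sits at
    view_point looking down the -z axis, and the view surface is a
    plane at distance focal_len. All names here are illustrative.
    """
    # Translate into the view-dependent coordinate system.
    x = point[0] - view_point[0]
    y = point[1] - view_point[1]
    z = point[2] - view_point[2]
    if z >= 0:
        return None  # point is behind (or at) the view point
    # Perspective projection onto the view surface.
    sx = focal_len * x / -z
    sy = focal_len * y / -z
    # Map to discrete pixel positions in screen space.
    px = int(width / 2 + sx)
    py = int(height / 2 - sy)
    return (px, py)
```

A point straight ahead of the view point lands at the center of a 640 by 480 screen, e.g. `world_to_screen((0, 0, -10), (0, 0, 0), 1.0, 640, 480)` yields `(320, 240)`.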
In the rendering stage, once the positions of the objects in screen space are determined, each pixel is "colored", i.e., assigned a color value from an available palette, by determining which objects are visible from the view point within the bounds of the view surface and then determining the color of the light that is emitted or reflected from the visible objects in the direction of the view point.
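One common way to resolve which object is visible at each pixel is a depth buffer. The sketch below is illustrative only, assuming the objects have already been rasterized into per-pixel fragments; the fragment format and names are hypothetical:

```python
def shade_pixels(fragments, width, height, background=(0, 0, 0)):
    """Resolve visibility with a depth buffer and color each pixel.

    `fragments` is a list of (x, y, depth, color) tuples produced by
    rasterizing the objects in screen space; smaller depth means
    closer to the view point. Illustrative sketch, not the patent's method.
    """
    depth = [[float("inf")] * width for _ in range(height)]
    image = [[background] * width for _ in range(height)]
    for x, y, d, color in fragments:
        # Keep only the nearest fragment covering each pixel.
        if 0 <= x < width and 0 <= y < height and d < depth[y][x]:
            depth[y][x] = d
            image[y][x] = color
    return image
```

When two fragments cover the same pixel, the one nearer the view point determines the pixel's color; pixels covered by no fragment keep the background color.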
A typical world space is a three-dimensional (3D) coordinate space where position is measured by floating point numbers, whereas a typical screen space is a 2D array having discrete positions defined by pixel locations. For example, objects might be specified by 32-bit values for positions in the world space and be mapped to a screen space defined by a 640 by 480 pixel array, with the pixel color value for each pixel being a 24-bit or larger value. A typical photorealistic image might require a resolution of 2000 pixels by 1000 pixels and contain a million or more separately specified scene elements (i.e., objects or light sources). While days might be available for rendering some images, many applications require real-time rendering, which generally requires specialized hardware (software image generation is usually not fast enough) and may even require that the hardware perform parallel processing.
The process of rendering to generate an image from a geometric model and a view point/surface using hardware is highly developed and many hardware graphics accelerators (HGAs) are available that can quickly render objects into images, using the above-described two-stage process or other processes. Such products often support standard command sets, such as the OpenGL API (application programming interface), a low-level programming interface that is closely related to the architecture of the HGA. This standardization makes those products convenient for development of rendering engines. Another command set that is becoming a standard is the Direct3D API developed by Microsoft Corporation. Direct3D is a higher level programming interface that has been used on computers primarily for games.
Inexpensive rendering cards for personal computers are able to render about one million triangles per second, but are typically limited to less complex geometric models. For example, many HGAs that are controlled through an OpenGL API only account for direct lighting and do not account for multiple reflections and shadows (occlusion). Consequently, designers of photorealistic image generation systems have had to forgo the use of commodity hardware and have been limited to expensive, high-performance rendering engines.
The present invention overcomes several disadvantages of the prior art methods and apparatus for generating images. In one embodiment of an image generator according to the present invention, a set of light sample points is computed, wherein each light sample point is a point on a light source from a geometric model, and an irradiance image is computed for each light sample point, wherein an irradiance image is a view-dependent image taken with the light sample point serving as both the view point and the light source for that irradiance image. From the irradiance images, the image generator creates an irradiance texture for each object in the set of objects making up the scene, and the image generator renders the image of the objects in the set of objects with each object's coloring determined, at least in part, from the object's irradiance texture. Depending on performance requirements, one or more operations of the image generator are parallelized.
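The light-sampling and texture-accumulation steps described above can be sketched in a toy form. The sketch below is a simplified stand-in under stated assumptions, not the claimed method: light sources are modeled as line segments, each "texture" is reduced to one scalar per surface point, and the contribution from each sample point uses a bare inverse-square falloff with no occlusion (whereas the described embodiment renders an occlusion-aware irradiance image from each sample point). All names are illustrative.

```python
def sample_light(light, n):
    """Pick n evenly spaced sample points on a toy linear light source.

    `light` is a pair of (x, y, z) endpoints; illustrative geometry only.
    """
    (x0, y0, z0), (x1, y1, z1) = light
    return [(x0 + (x1 - x0) * t, y0 + (y1 - y0) * t, z0 + (z1 - z0) * t)
            for t in (i / max(n - 1, 1) for i in range(n))]

def irradiance_at(point, sample):
    """Irradiance contribution from one light sample point.

    Simplified inverse-square falloff with no occlusion; the described
    method instead derives this from an irradiance image rendered
    with the sample point as the view point.
    """
    d2 = sum((a - b) ** 2 for a, b in zip(point, sample))
    return 1.0 / d2 if d2 > 0 else 0.0

def build_textures(objects, light_samples):
    """Accumulate a per-object irradiance texture.

    `objects` maps an object name to its surface sample points; one
    scalar per surface point stands in for a full texture map.
    """
    return {name: [sum(irradiance_at(p, s) for s in light_samples)
                   for p in points]
            for name, points in objects.items()}
```

In a final pass, each object's coloring would then be modulated by its accumulated irradiance texture; because each light sample point is processed independently, this loop is a natural candidate for the parallelization mentioned above.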
A further understanding of the nature and advantages of the inventions herein may be realized by reference to the remaining portions of the specification and the attached drawings.