The present invention pertains to the field of computer graphics systems. More particularly, this invention relates to a computer graphics system and method of rendering a scene based upon synthetically generated texture maps.
A typical computer graphics system includes a display device having a two-dimensional (2D) array of light emitting areas. The light emitting areas are usually referred to as pixels. Such a computer graphics system typically implements hardware and/or software for generating a 2D array of color values that determine the colors that are to be emitted from the corresponding pixels of the display device.
Such computer graphics systems are commonly employed for the display of three-dimensional (3D) objects. Typically, such a computer graphics system generates what appears to be a 3D object on a 2D display device by generating 2D views of the 3D object. The 2D view of a 3D object which is generated at a particular time usually depends on a spatial relationship between the 3D object and a viewer of the 3D object at the particular time. This spatial relationship may be referred to as the view direction.
U.S. utility application entitled &#8220;DIRECTION-DEPENDENT TEXTURE MAPS IN A GRAPHICS SYSTEM&#8221;, having Ser. No. 09/329,553, filed Jun. 10, 1999, now U.S. Pat. No. 6,297,834, discloses a method for generating texture maps in a graphics system, and is hereby incorporated by reference. The process by which a computer graphics system generates the color values for a 2D view of a 3D object is commonly referred to as image rendering. A computer graphics system usually renders a 3D object by subdividing the 3D object into a set of polygons and rendering each of the polygons individually.
The color values for a polygon that are rendered for a particular view direction usually depend on the surface features of the polygon and the effects of lighting on the polygon. The surface features include features such as surface colors and surface structures. The effects of lighting usually depend on a spatial relationship between the polygon and one or more light sources. This spatial relationship may be referred to as the light source direction.
Typically, the evaluation of the effects of lighting on an individual pixel in a polygon for a particular view direction involves a number of 3D vector calculations. These calculations usually include floating-point square root and divide operations. Such calculations are usually time consuming and expensive whether performed in hardware or software.
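To make the cost concrete, the following is a minimal per-pixel diffuse-lighting sketch in Python; the function names and the simple Lambertian model are illustrative rather than drawn from any particular system, but the sketch shows the floating-point square root and divide operations that must be repeated for every pixel:

```python
import math

def normalize(v):
    # One square root and three divides per call -- the costly operations
    # noted above, repeated for every pixel that is lit this way.
    length = math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2])
    return (v[0] / length, v[1] / length, v[2] / length)

def diffuse_intensity(surface_point, normal, light_pos):
    # Vector from the surface point toward the light source
    # (the "light source direction" of the preceding discussion).
    to_light = (light_pos[0] - surface_point[0],
                light_pos[1] - surface_point[1],
                light_pos[2] - surface_point[2])
    l = normalize(to_light)
    n = normalize(normal)
    # Lambertian diffuse term: dot product, clamped at zero.
    return max(0.0, n[0] * l[0] + n[1] * l[1] + n[2] * l[2])
```

Evaluating this at every pixel of every polygon requires two normalizations and a dot product per pixel, which motivates the interpolation shortcuts described next.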
One prior method for reducing such computation overhead is to evaluate the effects of lighting at just a few areas of a polygon, such as the vertices, and then interpolate the results across the entire polygon. Examples of these methods include methods that are commonly referred to as flat shading and smooth shading. Such methods usually reduce the number of calculations that are performed during scan conversion and thereby increase rendering speed. Unfortunately, such methods also usually fail to render shading features that are smaller than the areas of individual polygons.
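The interpolation step of smooth (Gouraud-style) shading can be sketched as follows; the barycentric-weight formulation is one common way to express the interpolation and is used here only for illustration:

```python
def gouraud_shade(vertex_intensities, bary):
    # Lighting was evaluated only at the three triangle vertices.
    # Each interior pixel blends those results using barycentric
    # weights (b0 + b1 + b2 == 1) -- no per-pixel vector math at all.
    i0, i1, i2 = vertex_intensities
    b0, b1, b2 = bary
    return b0 * i0 + b1 * i1 + b2 * i2
```

Because the interior pixels only blend the vertex results, any shading feature smaller than the triangle itself is necessarily lost, as noted above.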
One prior method for rendering features that are smaller than the area of a polygon is to employ what is commonly referred to as a texture map. A typical texture map is a table that contains a pattern of color values for a particular surface feature. For example, a wood grain surface feature may be rendered using a texture map that holds a color pattern for wood grain.
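A texture lookup of this kind reduces to indexing the table with the pixel's surface coordinates. The sketch below uses nearest-neighbor sampling and assumes normalized (u, v) coordinates in [0, 1]; real systems typically add filtering, but the table-lookup character is the same:

```python
def sample_texture(texture, u, v):
    # texture is a row-major 2D grid of color values (the "table"
    # described above); (u, v) are normalized surface coordinates.
    height = len(texture)
    width = len(texture[0])
    # Nearest-neighbor lookup: map each coordinate to a texel index,
    # clamping so u == 1.0 or v == 1.0 stays inside the table.
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]
```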
Unfortunately, texture mapping usually yields relatively flat surface features that do not change with the view direction or light source direction. The appearance of real 3D objects, on the other hand, commonly does change with the view direction and/or light source direction. These directional changes are commonly caused by 3D structures on the surface of a polygon. Such structures can cause localized shading or occlusions or changes in specular reflections from a light source. The effects can vary with view direction for a given light source direction and can vary with light source direction for a given view direction.
One prior method for handling the directional dependence of such structural effects in a polygon surface is to employ what is commonly referred to as a bump map. A typical bump map contains a height field from which a pattern of 3D normal vectors for a surface is extracted. The normal vectors are usually used to evaluate lighting equations at each pixel in the surface. Unfortunately, such evaluations typically involve a number of expensive and time-consuming 3D vector calculations, thereby decreasing rendering speed or increasing graphics system hardware and/or software costs.
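The extraction of a normal vector from a bump map's height field is typically done with finite differences, as in the sketch below; the (-dh/dx, -dh/dy, 1) formulation is one standard way to perturb the normal and is shown only as an illustration. Note that each extracted normal must still be normalized and fed into a lighting evaluation per pixel, which is the expense identified above:

```python
import math

def bump_normal(height_field, x, y):
    # Finite differences over the height field give the local slope.
    dhdx = height_field[y][x + 1] - height_field[y][x]
    dhdy = height_field[y + 1][x] - height_field[y][x]
    # Perturbed surface normal, then normalized -- a square root and
    # three divides for every pixel, before lighting is even evaluated.
    n = (-dhdx, -dhdy, 1.0)
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return (n[0] / length, n[1] / length, n[2] / length)
```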
In applications such as 3D computer-generated animations, a scene is composed of multiple sequential frames of imagery. During the process of creating a computer-generated graphics presentation, such as an animation or movie, a sequence of scenes depicting various environments and objects is created and assembled to form a complete presentation, which is displayed to a user via a display device on a sequential basis, scene by scene.
Each scene may be composed of a sequence of frames. A frame is typically a 2D static representation of a 3D or 2D object within a defined environment.
Each frame may present a 3D or 2D object or objects from a particular viewing (camera) angle or as illuminated from a particular lighting angle. From frame to frame of the scene, such things as camera angle or lighting angle may change, thereby giving the scene a dynamic feel through a sense of motion or change. For example, an object may be viewed in one frame from a head-on viewing position, while in a second, sequential frame the same object is viewed from a left-side viewing position. When the two frames are viewed in sequence, the object appears to turn from a straight-ahead position to a position facing to the right-hand side of the object. The process of creating a scene involves assembling a series of images or frames. During this process, it is common for the creator/editor to preview the scene in order to determine the progress or status of work done on the scene to that point.
Where 3D objects and environments are represented in the scene, each frame must be rendered to add realism and 3D qualities such as shadows and variations in color or shade. In a computer graphics system, this rendering process is computationally intensive and can take significant time to complete, depending on the level of 3D quality desired for the preview or scene display and on the power of the computer hardware used to carry out the rendering computations. As a result, it is common for creators/authors to opt for a lower level of detail in 3D quality when carrying out a preview of a scene. Some examples of lower-level scene quality include wire-frame presentations or low-resolution texture mapping. While this does allow the creator/author to preview a general representation of the scene in less time, it falls far short of providing a preview of a true representation of the scene as it will appear in final form.
In order to provide a realistic appearance to the scene and the objects therein, texture mapping techniques are used to provide such things as shadows, highlights, and surface texture to the objects and scene surfaces through the process of rendering.
Techniques have been proposed for generating an image-based texture map in which a scene and/or an object within a scene is photographed from multiple pre-defined camera positions via, for example, a digital or film-based still camera. A variation on this technique holds the camera position constant while a light/illumination source is moved so as to illuminate the object from different angles, thereby casting different shadows and highlights on the object. While this technique is useful, it is quite time consuming to generate a series of images for use in generating an image-based texture map, and of course it requires the actual physical object or a model of it.
Typically, at some point during the process of creating a computer-generated scene, the user (creator) wants to view the scene to determine if it is as desired. To do so in a manner that presents a photorealistic appearance to the scene, each frame is typically rendered in accordance with a predefined texture map. As the process of rendering a scene can require considerable time to carry out due to the computational complexities of rendering the scene, it is common for users to select a lower level of scene quality for representation during the &#8220;preview&#8221; of the scene.
Thus, a heretofore unaddressed need exists in the industry to address the aforementioned deficiencies and inadequacies.
The present invention provides a system and method for rendering a synthetic or computer-generated scene based on a predetermined set of base images. Briefly described, in architecture, the system can be implemented as follows. A storage memory is provided for storing a texture map generated in accordance with a predetermined number of base images. A rendering unit is provided for rendering a scene in accordance with the image-based texture map and then outputting the rendered scene for display on a display device. A controller is provided for controlling the rendering unit.
A further embodiment of the invention may be described as follows. A base image generator is provided for sampling a scene and generating a plurality of base images. A texture map generator is provided for generating a texture map in accordance with the plurality of base images. A storage device is provided for storing the texture map, and a rendering unit is provided for rendering the scene in accordance with the texture map.
The present invention can also be viewed as providing a method for rendering a photorealistic scene. In this regard, the method can be broadly summarized by the following steps: a scene composed of a synthetic environment and a synthetic object is defined; a predetermined number of base images is generated in accordance with predefined scene parameters; a texture map is created based on the predetermined number of base images; and the scene is rendered in accordance with the texture map.
Additionally, the present invention may be viewed as providing a method of generating photorealistic imagery that may be summarized by the following steps: a scene composed of a synthetic environment and a synthetic object is defined; a predetermined number of base images is generated in accordance with predefined scene parameters; data representing the base images is stored to memory; data representing the base images is retrieved from memory and a texture map is created based on the predetermined number of base images; and the scene is rendered in accordance with the texture map and displayed for viewing.
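The sequence of steps above can be sketched as a simple pipeline. The function names, the parameter representation, and the dictionary-based storage below are all hypothetical placeholders chosen for illustration; the sketch only shows how base-image generation, storage, texture-map creation, and rendering chain together:

```python
def generate_base_images(scene, parameters):
    # Hypothetical sampler: produce one base image of the synthetic
    # scene per predefined scene parameter (e.g. a lighting direction).
    return [{"scene": scene, "params": p} for p in parameters]

def build_texture_map(base_images):
    # Hypothetical combiner: derive a texture map from the base images.
    return {"base_images": base_images}

def render_scene(scene, texture_map):
    # Hypothetical renderer: render the scene using the texture map.
    return {"scene": scene, "texture_map": texture_map}

def preview(scene, parameters, storage):
    # The method steps in order: generate, store, retrieve, build, render.
    base_images = generate_base_images(scene, parameters)
    storage["base_images"] = base_images
    texture_map = build_texture_map(storage["base_images"])
    return render_scene(scene, texture_map)
```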
Other features and advantages of the present invention will be apparent from the detailed description that follows.