This invention relates generally to computer graphics. More particularly, this invention relates to techniques for rendering synthetic objects into real scenes in a computer graphics context.
The practice of adding new objects to photographs dates to the early days of photography in the simple form of pasting a cut-out from one picture onto another. While the technique conveys the idea of the new object being in the scene, it usually fails to produce an image that as a whole is a believable photograph. Attaining such realism requires a number of aspects of the two images to match. First, the camera projections should be consistent, otherwise the object may seem too foreshortened or skewed relative to the rest of the picture. Second, the patterns of film grain and film response should match. Third, the lighting on the object needs to be consistent with other objects in the environment. Lastly, the object needs to cast realistic shadows and reflections on the scene. Skilled artists found that by giving these considerations due attention, synthetic objects could be painted into still photographs convincingly.
In optical film compositing, the use of object mattes to prevent particular sections of film from being exposed made the same sort of cut-and-paste compositing possible for moving images. However, the increased demands of realism imposed by the dynamic nature of film made matching camera positions and lighting even more critical. As a result, care was taken to light the objects appropriately for the scene into which they were to be composited. This would still not account for the objects casting shadows onto the scene, so often these were painted in by artists frame by frame. Digital film scanning and compositing helped make this process far more efficient.
Global illumination work has recently produced algorithms and software to realistically simulate lighting in synthetic scenes, including indirect lighting with both specular and diffuse reflections. Some work has been done on the specific problem of compositing objects into photographs. For example, there are known procedures for rendering architecture into background photographs using knowledge of the sun position and measurements or approximations of the local ambient light. For diffuse buildings in diffuse scenes, the technique can be effective. The technique of reflection mapping (also called environment mapping) produces realistic results for mirror-like objects. In reflection mapping, a panoramic image is rendered or photographed from the location of the object. Then, the surface normals of the object are used to index into the panoramic image by reflecting rays from the desired viewpoint. As a result, the shiny object appears to properly reflect the desired environment. (Using the surface normal indexing method, the object will not reflect itself; correct self-reflection can be obtained through ray tracing.) However, the technique is limited to mirror-like reflection and does not account for objects casting light or shadows on the environment.
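The surface-normal indexing step of reflection mapping can be illustrated with a brief sketch. The latitude-longitude panorama parameterization, array shapes, and function names below are illustrative assumptions rather than part of any particular prior-art system:

```python
import numpy as np

def reflect(view_dir, normal):
    """Reflect a unit view direction about a unit surface normal."""
    return view_dir - 2.0 * np.dot(view_dir, normal) * normal

def sample_env_map(env_map, direction):
    """Index a latitude-longitude panoramic image by a unit direction."""
    h, w, _ = env_map.shape
    theta = np.arccos(np.clip(direction[1], -1.0, 1.0))   # polar angle from +Y
    phi = np.arctan2(direction[2], direction[0])          # azimuth
    u = int((phi + np.pi) / (2.0 * np.pi) * (w - 1))
    v = int(theta / np.pi * (h - 1))
    return env_map[v, u]

# Toy panorama: a bright "sky" fills the upper half of the image.
env = np.zeros((64, 128, 3))
env[:32, :, :] = [0.8, 0.9, 1.0]

# A mirror-like pixel: reflect the eye ray off a horizontal surface
# and look the result up in the panorama.
view = np.array([0.0, -1.0, -1.0]) / np.sqrt(2.0)  # eye ray, looking down at 45 degrees
n = np.array([0.0, 1.0, 0.0])                      # surface normal straight up
r = reflect(view, n)                               # reflected ray points upward
color = sample_env_map(env, r)                     # so it samples the "sky"
```

As noted above, because the lookup depends only on the reflected direction, the object cannot reflect itself; that limitation requires ray tracing to overcome.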
A common visual effects technique for having synthetic objects cast shadows on an existing environment is to create an approximate geometric model of the environment local to the object, and then compute the shadows from various manually specified light sources. The shadows can then be subtracted from the background image. In the hands of professional artists this technique can produce excellent results, but it requires knowing the position, size, shape, color, and intensity of each of the scene's light sources. Furthermore, it does not account for diffuse reflection from the scene, and light reflected by the objects onto the scene must be handled specially.
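The shadow-subtraction compositing step described above can be sketched numerically. The darkening factor and mask layout are illustrative assumptions; in practice the shadow strength for each light source is tuned by hand, which is precisely the labor the technique demands:

```python
import numpy as np

def composite_with_shadow(background, object_rgba, shadow_mask, shadow_strength=0.5):
    """Darken the background where the synthetic object's shadow falls,
    then alpha-composite the object over the darkened plate."""
    shaded = background * (1.0 - shadow_strength * shadow_mask[..., None])
    alpha = object_rgba[..., 3:4]
    return object_rgba[..., :3] * alpha + shaded * (1.0 - alpha)

bg = np.full((4, 4, 3), 0.8)                      # uniform background plate
obj = np.zeros((4, 4, 4))
obj[1, 1] = [0.2, 0.2, 0.2, 1.0]                  # synthetic object at one pixel
mask = np.zeros((4, 4))
mask[2, 2] = 1.0                                  # its hand-placed shadow at another
out = composite_with_shadow(bg, obj, mask)
```

Note that the shadow density is a manual guess; nothing in this workflow accounts for diffuse interreflection between the object and the scene.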
Thus, there are a number of difficulties associated with rendering synthetic objects into real-world scenes. It is becoming particularly important to resolve these difficulties in the field of computer graphics, especially in the architectural and visual effects domains. Oftentimes, a piece of furniture, a prop, or a digital creature or actor needs to be rendered seamlessly into a real scene. This difficult task requires that the objects be lit consistently with the surfaces in their vicinity, and that the interplay of light between the objects and their surroundings be properly simulated. Specifically, the objects should cast shadows, appear in reflections, and refract, focus, and emit light just as real objects would.
Currently available techniques for realistically rendering synthetic objects into scenes are labor intensive and not always successful. A common technique is to manually survey the positions of the light sources, and to instantiate a virtual light of equal color and intensity for each real light to illuminate the synthetic objects. Another technique is to photograph a reference object (such as a gray sphere or a real model similar in appearance to the chosen synthetic object) in the scene where the new object is to be rendered, and use its appearance as a qualitative guide in manually configuring the lighting environment. Lastly, the technique of reflection mapping is useful for mirror-like reflections. These methods typically require considerable hand-refinement and none of them properly simulates the effects of indirect illumination from the environment.
Accurately simulating the effects of both direct and indirect lighting has been the subject of research in global illumination. With a global illumination algorithm, if the entire scene were modeled with its full geometric and reflectance (BRDF) characteristics, one could correctly render a synthetic object into the scene simply by adding to the model and re-computing the global illumination solution. Unfortunately, obtaining a full geometric and reflectance model of a large environment is extremely difficult. Furthermore, global illumination solutions for large complex environments are extremely computationally intensive.
Moreover, it seems that having a full reflectance model of the large-scale scene should be unnecessary: under most circumstances a new object will have no significant effect on the appearance of most of the distant scene. Thus, for such distant areas, knowing just their radiance (under the desired lighting conditions) should suffice.
The patent application entitled "Apparatus and Method for Recovering High Dynamic Range Radiance Maps from Photographs", Ser. No. 09/126,631, filed Jul. 30, 1998, (hereinafter referred to as "the Debevec patent") introduces a high dynamic range photographic technique that allows accurate measurements of scene radiance to be derived from a set of differently exposed photographs. The patent application is assigned to the assignee of the present invention and is incorporated by reference herein. The technique described in the patent application allows both low levels of indirect radiance from surfaces and high levels of direct radiance from light sources to be accurately recorded. When combined with image-based modeling and rendering techniques such as view interpolation, projective texture mapping, and possibly active techniques for measuring geometry, these derived radiance maps can be used to construct spatial representations of scene radiance.
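The spirit of the multiple-exposure technique can be conveyed with a simplified sketch that assumes a linear camera response; the full technique in the Debevec patent additionally recovers the nonlinear response curve from the photographs themselves. The hat-shaped weighting function below, which trusts mid-range pixels most, is one common choice and not a requirement:

```python
import numpy as np

def recover_radiance(images, exposures):
    """Estimate scene radiance from differently exposed images (pixel
    values in [0, 1]), assuming a linear camera response.  Each exposure
    contributes img / t, weighted so that under- and over-exposed pixels
    (near 0 or 1) count for little."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weighting: trust mid-tones
        num += w * img / t                  # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)

# Two pixels: one dim, one bright enough to saturate the longer exposure.
long_exp = np.array([0.25, 1.0])      # t = 1.0; second pixel clipped at 1.0
short_exp = np.array([0.0625, 0.5])   # t = 0.25; second pixel still in range
radiance = recover_radiance([long_exp, short_exp], [1.0, 0.25])
```

The saturated pixel receives zero weight in the long exposure, so its radiance is recovered entirely from the short exposure; this is how both low indirect radiance and high direct radiance can be captured in one map.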
The term light-based model refers to a representation of a scene that consists of radiance information, possibly with specific reference to light leaving surfaces, but not necessarily containing reflectance property (BRDF) information. A light-based model can be used to evaluate the 5D plenoptic function P(θ, φ, Vx, Vy, Vz) for a given virtual or real subset of space, as described in Adelson, et al., "Computational Models of Visual Processing", MIT Press, Cambridge, Mass., 1991, Ch. 1. A material-based model is converted to a light-based model by computing an illumination solution for it. A light-based model is differentiated from an image-based model in that its light values are actual measures of radiance, whereas image-based models may contain pixel values already transformed and truncated by the response function of an image acquisition or synthesis process.
It would be highly desirable to provide a technique for realistically adding new objects to background plate photography as well as general light-based models. The synthetic objects should be able to have arbitrary material properties and should be able to be rendered with appropriate illumination in arbitrary lighting environments. Furthermore, the objects should correctly interact with the environment around them; that is, they should cast the appropriate shadows, they should be properly reflected, they should reflect and focus light, and they should exhibit appropriate diffuse inter-reflection. Ideally, the method should be carried out with commonly available equipment and software.
A method of placing an image of a synthetic object into a scene includes the step of establishing a recorded field of illumination that characterizes variable incident illumination in a scene. A desired position and orientation of a synthetic object is specified within the scene with respect to the recorded field of illumination. A varying field of illumination caused by the synthetic object is identified within the scene. Synthetic object reflectance caused by the synthetic object is simulated in the scene. The synthetic object is then constructed within the scene using the varying field of illumination and the synthetic object reflectance.
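The illumination step, in which a point on the synthetic object is lit by the recorded field of illumination, can be sketched for the simplest (diffuse) case. The direction sampling, array layout, and function name below are illustrative assumptions; the invention contemplates general global illumination, of which this is only the most elementary instance:

```python
import numpy as np

def diffuse_shade(light_dirs, light_radiance, normal, albedo):
    """Shade a diffuse point on a synthetic object using measured scene
    radiance: average the incoming radiance over the sampled directions,
    weighted by the cosine of the angle of incidence (clamped at zero)."""
    cos = np.clip(light_dirs @ normal, 0.0, None)       # n . l per sample
    return albedo * (light_radiance * cos[:, None]).mean(0)

# Two samples of recorded illumination: bright light from above, and
# dim light from below, which the upward-facing normal cannot receive.
dirs = np.array([[0.0, 1.0, 0.0],
                 [0.0, -1.0, 0.0]])
radiance = np.array([[1.0, 1.0, 1.0],
                     [0.2, 0.2, 0.2]])
color = diffuse_shade(dirs, radiance, np.array([0.0, 1.0, 0.0]), albedo=0.5)
```

Because the radiance samples are actual measurements of scene light rather than manually instantiated virtual lights, the object's shading is automatically consistent with its surroundings.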
The technique of the invention allows for one or more computer-generated objects to be illuminated by measurements of illumination in a real scene and combined with existing photographs to compute a realistic image of how the objects would appear if they actually had been photographed in the scene, including shadows and reflections on the scene. A photographic device called a light probe is used to measure the full dynamic range of the incident illumination in the scene. An illumination technique (for example, a global illumination technique) is used to illuminate the synthetic objects with these measurements of real light in the scene. To generate the image of the objects placed into the scene, the invention partitions the scene into three components. The first is the distant scene, which is the visible part of the environment too remote to be perceptibly affected by the synthetic object. The second is the local scene, which is the part of the environment that will be significantly affected by the presence of the objects. The third component is the synthetic objects. The illumination algorithm is used to correctly simulate the interaction of light amongst these three elements, with the exception that light radiated toward the distant environment will not be considered in the calculation. As a result, the reflectance characteristics of the distant environment need not be known; the technique uses reflectance characteristics only for the local scene and the synthetic objects. The challenges in estimating the reflectance characteristics of the local scene are addressed through techniques that result in usable approximations. A differential rendering technique produces perceptually accurate results even when the estimated reflectance characteristics are only approximate.
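The differential rendering idea can be sketched as follows; the function name and mask convention are illustrative assumptions. Two renderings of the local scene are compared, one with and one without the synthetic objects, and only their difference is applied to the real photograph, so that errors in the estimated local-scene reflectance largely cancel:

```python
import numpy as np

def differential_render(background, with_objects, without_objects, obj_mask):
    """Where the synthetic objects are visible, use the rendering directly;
    elsewhere, add the change the objects cause (with - without) to the
    real photograph, so shadows and reflections are transferred as offsets
    rather than absolute values."""
    delta = with_objects - without_objects
    composite = background + delta
    return np.where(obj_mask[..., None] > 0, with_objects, composite)

bg = np.full((2, 2, 3), 0.8)            # real photograph
without = np.full((2, 2, 3), 0.7)       # local-scene rendering (reflectance estimate is off)
withobj = without.copy()
withobj[0, 0] = 0.2                     # synthetic object pixel
withobj[1, 1] = 0.35                    # pixel the object shadows (halved)
mask = np.zeros((2, 2))
mask[0, 0] = 1.0                        # object visibility mask
out = differential_render(bg, withobj, without, mask)
```

In this toy example the rendered local scene (0.7) disagrees with the photograph (0.8), yet the shadowed pixel lands at 0.45, half the photograph's value: the 0.1 reflectance error cancels out of the difference, which is why the composite remains perceptually accurate with only approximate reflectance estimates.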
The invention also allows real objects and actors to be rendered into images and environments in a similar manner by projecting the measured illumination onto them with computer-controlled light.