The invention relates to systems for the production of rendered 2-D images derived from 3-D scene data using computers and more particularly to a system that automates the production of separate sequences of rendered images, known as passes, which are to be used with compositing software to form a completed image or image sequence.
Modern computer graphics, as often seen in movies and computer-generated artwork, consist of two-dimensional (2-D) images or image sequences (movies) that are derived from complex 3-D data. The 3-D scene data includes the 3-D coordinates of every object in a scene. Since the images derived from the scene are intended to show a realistic representation of actual 3-D objects, the scene data also includes objects or definitions, called "shaders," that are used to control rendering-related properties of objects and the scene as a whole, for example, the surface and volume properties of each object. For instance, the shaders dictate how light is reflected, refracted, and scattered by the objects. Shaders can also be used to control the rendering properties of internal volumes of space (e.g., a 3-D object that delimits a puff of smoke) or of the entire scene environment, the latter being called an environmental shader.
To make a realistic image, the 3-D scene is rendered. The process of rendering involves ray-tracing, which determines the look of each pixel visible from the camera viewpoint. In ray-tracing, the effects of occultation and of diffuse and specular reflection, refraction, and diffusion of light by the various objects and volumes in the scene are determined. Ray-tracing accounts not only for primary effects, which are the reflections, refractions, and diffusions of light coming directly from the light sources, but also for secondary effects, in which light arriving from other objects illuminates or passes through an object or volume. These secondary effects can involve multiple reflections between the original light source and the camera. Considering that rays must be traced for every pixel in a scene, and that some shaders involve complex numerical algorithms, the process of rendering is extremely time consuming for current computer technology.
To speed up the process of authoring such images and image sequences (the latter corresponding to animation as opposed to still images), graphic artists generate images that each include particular features of the final image and that, when combined (perhaps with others, if not all essential passes are generated), form a complete image or image sequence. For example, a so-called matte pass shows only the outline of a first object; that is, it shows only the parts of the objects behind the first object that are not occulted by it. In such a pass, the first object might appear solid white with no surface features at all. Another pass could be a shadow pass, showing only the shadow created by an object or group of objects. These passes are combined to form a final image in a process called compositing.
Breaking a final rendered image into these passes and subsequently compositing the passes allows an intermediate process, prior to compositing, in which specific features of the final image may be modified by editing the pass images using pixel-editing software. Various features may be tweaked without going back to the original 3-D models. For example, the darkness or hue of a shadow may be tweaked by editing a shadow pass image. The subsequent process of compositing is performed quickly to provide a full final image. The artist can then return to the passes to make further changes and again re-composite to see the results. Since the compositing operation, which starts with the passes, runs very quickly, this process of tweaking pixel properties can be performed iteratively and quickly to refine the images. The alternative, in the shadow example, would be to change the lighting in the 3-D scene data to produce the desired effect. This would require a re-rendering of the entire scene, which takes a long time.
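Because compositing is only per-pixel arithmetic on already-rendered pass images, a tweak such as lightening a shadow can be applied and previewed without any re-rendering. The following minimal sketch illustrates this with hypothetical grayscale pass data; the `composite` function and its shadow-attenuation formula are illustrative assumptions, not the disclosure's compositing method.

```python
def composite(beauty, shadow, shadow_strength=1.0):
    """Darken the beauty pass wherever the shadow pass is dark.

    beauty, shadow: lists of grayscale pixel values in [0.0, 1.0].
    shadow_strength: a compositing-time tweak; changing it needs no
    re-rendering of the 3-D scene, only this cheap per-pixel loop.
    """
    out = []
    for b, s in zip(beauty, shadow):
        # shadow pass convention here: 1.0 = fully lit, 0.0 = fully shadowed
        attenuation = 1.0 - shadow_strength * (1.0 - s)
        out.append(b * attenuation)
    return out

beauty = [0.9, 0.8, 0.7]
shadow = [1.0, 0.5, 0.0]

full = composite(beauty, shadow, shadow_strength=1.0)  # hard shadow
soft = composite(beauty, shadow, shadow_strength=0.5)  # lighter shadow
```

Here the expensive rendering produced `beauty` and `shadow` once; every subsequent tweak of `shadow_strength` reuses them.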
The following are various types of passes that can be created. A beauty pass is a full rendering of a selected object or group of objects; the beauty pass renders the entire scene with no modifications. A matte pass shows an outline of a selected object with the surface of the object appearing uniform, so that it demarcates a silhouette of the object. The background and non-selected objects are invisible in the matte pass. A shadow pass shows only a shadow generated by an object, with the object generating the shadow and other objects (as well as the background) not appearing upon rendering. A highlight pass shows only the surfaces of selected objects that appear bright due to specular reflection, with the rest of the scene flagged as invisible. A transparency pass is a beauty pass of one or more transparent selected objects with the rest of the scene flagged as invisible. A refraction pass shows only light refracted through one or more selected objects with the rest of the scene flagged as invisible. This list is by no means exhaustive and is provided by way of example to facilitate the purposes of this disclosure.
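The pass types above can be summarized as the property overrides each pass implies for selected versus non-selected objects. The override vocabulary below (`visible`, `casts_shadow`, `show_only`, and so on) is hypothetical, chosen only to make the catalog concrete.

```python
# Illustrative catalog of the pass types described above, expressed as
# (overrides for selected objects, overrides for everything else).
PASS_TYPES = {
    "beauty":       ({}, {}),                               # full render
    "matte":        ({"color": "white", "shading": "flat"},
                     {"visible": False}),
    "shadow":       ({"visible": False, "casts_shadow": True},
                     {"visible": False}),
    "highlight":    ({"show_only": "specular"}, {"visible": False}),
    "transparency": ({}, {"visible": False}),
    "refraction":   ({"show_only": "refracted"}, {"visible": False}),
}

def overrides_for(pass_name, selected):
    """Return the property overrides a pass applies to an object,
    depending on whether the object is among the selected ones."""
    sel, rest = PASS_TYPES[pass_name]
    return sel if selected else rest
```

For example, `overrides_for("matte", selected=False)` yields the overrides that hide the background and non-selected objects.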
Obviously, only some modifications can be made efficiently by pixel-editing the pass images. Certain modifications are only efficiently done by returning to the 3-D scene. For example, if the shape of an object must be changed, the complex modifications that have to be implemented, such as highlighting, shading, and the details of the shape, require editing of the 3-D model of the object defined in the scene.
Referring to FIG. 1, the process of generating passes after the step of creating or editing the three-dimensional scene 10 involves several steps. First, a number of copies of the scene are made 15a-15n. Then each copy is edited 20a-20n to modify the properties as appropriate for the respective pass to be generated. For example, to generate the matte pass for a first object, the user sets the surface shaders of the object such that the object will appear totally white when this version of the scene is rendered. The same is done for each object for which a matte pass is to be generated. As another example, to generate a beauty pass for the first object, another copy of the scene is made and modified to set the shaders of all but the first object transparent or "zero-alpha." Thus, each time a scene is edited, a copy is made for each pass, and the pass-specific parameters of the copy are set to generate each particular pass. This is because changes in the 3-D scene may (and probably do) affect every single pass. Next, each edited copy is rendered to create the particular pass. The user works with the passes in step 45 and may decide to edit the 3-D scene. The process of generating a new set of passes is then the same as before: the author returns to the step of editing the scene 10 and follows the same procedure to produce a new set of passes. This process of editing copies of scenes to generate passes may be tedious and time consuming. The tasks indicated by dark-bordered boxes are labor-intensive activities; each must be repeated every time the scene is edited.
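The prior-art loop of FIG. 1 can be sketched as follows. The data structures and the `render` stand-in are assumptions for illustration; the point is that the copying and editing labor repeats in full, for every pass, after every scene change.

```python
import copy

def manual_pass_workflow(scene, pass_edits, render):
    """Prior-art loop of FIG. 1: one hand-edited scene copy per pass.

    pass_edits: {pass name: function that mutates a scene copy}
                (stands in for the manual editing steps 20a-20n)
    render:     stand-in for the expensive ray-tracing step
    """
    passes = {}
    for name, edit in pass_edits.items():
        scene_copy = copy.deepcopy(scene)    # copying steps 15a-15n
        edit(scene_copy)                     # manual editing steps 20a-20n
        passes[name] = render(scene_copy)    # one full rendering per pass
    return passes
```

Every edit to the 3-D scene invalidates all the copies, so the whole loop must be rerun from scratch each time.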
The invention is a system and method for automating the process of creating passes from a three-dimensional scene. The invention allows a user to define, in a separate step, a series of pass definitions. Each pass definition includes properties that override those defined in the scene. When a given pass definition is applied to the 3-D scene, the system automatically changes the scene according to the pass definition. Once the pass is set as active, the scene is rendered to produce a respective pass-image or image sequence, or, simply, "pass."
Consider, for example, the steps for making the matte pass. In the invention, the definition of a matte pass is broken out as a separate process. The properties may be assigned to objects through object groupings called partitions. Thus, the pass definition may identify a particular partition and a particular set of property values per partition. All objects defined in the scene as belonging to that particular partition will then inherit the properties of the partition. Partitions may be hierarchical, with parent partitions passing their properties by inheritance to respective child partitions, which in turn pass on their properties to objects or to further progeny partitions. For example, in a matte pass definition for a first constellation of objects defined as belonging to a first partition, the first partition is identified along with the override property and value to be applied. So the definition might say: partition 1, shader hue = white, transparency = 0%, or some more technically efficient equivalent. These new values override the shaders applied in the scene definition. So if partition 1 is a group of dinosaur scales that are defined with all sorts of colors and reflectivities in the scene definition, all these shaders are replaced by the shaders of the partition. This causes the dinosaur to appear totally white upon rendering. In the same pass definition, the properties of all other objects are automatically overridden so that these objects are invisible upon rendering. The resulting matte pass can then be used for tweaking of the 2-D image and/or compositing as taught in the prior art.
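The partition mechanism can be sketched as follows, under assumed data structures: a partition's overrides supersede the scene's own shader properties, and hierarchical partitions pass overrides down by inheritance, with a child's own values winning over inherited ones. All names here are hypothetical.

```python
class Partition:
    """Hierarchical object grouping; children inherit parent overrides."""
    def __init__(self, name, overrides=None, parent=None):
        self.name = name
        self.overrides = overrides or {}
        self.parent = parent

    def effective_overrides(self):
        # A partition's own overrides win over anything it inherits.
        inherited = self.parent.effective_overrides() if self.parent else {}
        return {**inherited, **self.overrides}

def resolve_properties(scene_props, partition):
    """Shaders defined in the scene are superseded by partition overrides."""
    return {**scene_props, **partition.effective_overrides()}

# Matte-pass example: partition 1 forces its members white and opaque,
# overriding whatever colors and reflectivities the scene defines.
partition_1 = Partition("partition 1", {"hue": "white", "transparency": 0.0})
dinosaur_scale = {"hue": "green", "reflectivity": 0.3}
matte_props = resolve_properties(dinosaur_scale, partition_1)
# matte_props == {"hue": "white", "reflectivity": 0.3, "transparency": 0.0}
```

Properties the pass does not mention (`reflectivity` above) pass through from the scene unchanged, which is the superseding behavior the embodiments below describe.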
The various passes attached to a scene are maintained on the system to allow them to be manipulated and re-composited as desired. Compositing is done conventionally and outside the system defined by the invention. As mentioned, the single scene description is used to render the various pass-specific images or image-sequences.
An important motivation for automating the process of generating render passes can be found in considering the in-context rendering invention described in a copending US Patent Application entitled "A System for Editing Complex Visual Data Providing a Continuously Updated Rendering," the entirety of which is incorporated herein by reference. The in-context rendering system provides a rendering in the authoring environment itself. This helps the author tune geometric and non-geometric (surfaces, light-diffusion effects, light intensity, etc.) properties of the scene by providing continuous feedback on how a final rendering is affected by the author's modifications to the scene. The in-context rendering system allows the author to focus on particular geometric features of the scene by tailoring the render region image's size and the objects shown. Automating the process of making passes allows a step of filtering the scene through a currently-selected pass definition before applying the rendering technology described in the in-context rendering disclosure. This means that the rendering displayed in the authoring environment can be formed according to any selected pass definition. This allows the author to focus even more narrowly on certain features of the scene as the author works. For example, the author can select a highlight pass as current and tune the 3-D parameters that give rise to the highlight. Compare this to the prior-art process of editing the 3-D scene data to obtain the highlight, rendering a preview of the edited scene, and then going back and editing the 3-D scene again. Thus, the automation of forming rendering passes provides a tool in the authoring environment that goes beyond merely reducing the labor involved in creating passes for purposes of pixel-tweaking the image. It also provides the author the option of invoking a new perspective, a different kind of immediate feedback, right in the authoring environment.
This feedback enhances the author's ability to focus on specific features of a scene as the author edits the 3-D scene itself.
The system automates the production of so-called pass-image sequences (or just "passes") from data defining 3-D scenes. The invention automates the production of passes by filtering the 3-D scene through pre-specified pass definitions that override properties of the 3-D scene. The results of filtering are rendered (rendering largely comprises the process of ray-tracing) to form the passes. The system may store numerous pass definitions. Each time the 3-D scene is edited, the passes can be produced automatically from the pass definitions. This automation of pass production also allows the passes to be used in the authoring environment, providing a pass preview of the 3-D scene rendering that expedites editing of the 3-D scene. This automation also allows passes to be produced without the substantial manual labor ordinarily required in editing the 3-D scene.
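The automated flow above can be sketched as follows: one scene, many stored pass definitions, and each pass produced by filtering the scene through a definition and rendering the result, with no manual copies. The data shapes and function names are assumptions for illustration.

```python
def filter_scene(scene, pass_definition):
    """Override scene-defined properties with the pass definition's.

    scene:            {object name: {property: value}}
    pass_definition:  {object name: {property: override value}}
    """
    return {obj: {**props, **pass_definition.get(obj, {})}
            for obj, props in scene.items()}

def produce_passes(scene, pass_definitions, render):
    """Rerun automatically after every scene edit; the stored pass
    definitions replace the hand-edited scene copies of the prior art."""
    return {name: render(filter_scene(scene, definition))
            for name, definition in pass_definitions.items()}
```

A scene edit now requires only calling `produce_passes` again; the pass definitions themselves are written once and reused.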
According to an embodiment, the invention provides a method for creating a two-dimensional image from a three-dimensional scene. The steps include defining a scene including geometry and a first surface characteristic definition of at least one object. A rendering of the scene produces an image of the object that is determined, at least in part, by the first surface characteristic. A result of the defining step is stored in a computer. The user may then edit a pass definition, a result of the editing being a pass definition that includes a second surface characteristic definition of the object. The surface characteristics may be any non-geometric property. A result of the editing is stored in the computer. When the user activates a particular pass, the partition properties overlay the original properties of the object(s). The rendering is generated in the context of the active pass.
According to another embodiment, the invention is a method for creating and working with pass images (or image sequences) from a three-dimensional scene. The steps include storing a scene including scene geometric and scene non-geometric properties of objects. The next step in this embodiment is storing pass definitions, each including at least one override non-geometric property of at least one of the objects. The next step is filtering a respective copy of the scene using each of the stored pass definitions such that the at least one override non-geometric property is used to determine an appearance of a rendering of the respective pass. A result of the step of filtering is that a scene non-geometric property of the respective copy is superseded by the override non-geometric property such that the scene non-geometric property is not used to determine an appearance of the rendering of the respective pass. The last step may consist of rendering the pass, producing images of the objects that are determined, at least in part, by the non-geometric properties.
According to yet another embodiment, the invention is a method of providing an image of a scene. The steps are: receiving scene definition data and storing the scene definition data, the scene definition data including geometric and non-geometric properties; receiving pass definition data and storing the pass definition data, the pass definition data including non-geometric properties; and rendering a view of a scene defined by the stored scene definition data, to produce an image, responsively to the stored scene definition data and the stored pass definition data. The rendering is such that a non-geometric property in the scene definition is replaced by a non-geometric property in the pass definition. Another step is displaying the image while accepting modifications to the scene definition and using the modifications to update the scene definition data; and updating the rendering responsively to the step of accepting. An additional step that may be added is receiving render-view definition data and storing the render-view definition data, the step of rendering a view being performed responsively to the render-view definition data. The step of rendering may be performed asynchronously with respect to the steps of receiving. The pass definition data may include multiple partitions, i.e., mutually exclusive groups of scene objects. Also, the step of rendering may include rendering the view responsively to the current pass to the exclusion of other passes.
According to still another embodiment, the invention is a method of producing a user-interface for authoring 3-D scenes. The steps include receiving scene definition changes and modifying a stored scene definition responsively to the changes, the stored scene definition containing geometric and non-geometric parameters of the 3-D scene. The steps further include receiving data indicating a selection of one of multiple pre-defined pass definitions, each defining at least one non-geometric parameter of the scene. At least some of the non-geometric parameters in the multiple pre-defined pass definitions are redundant with respect to corresponding non-geometric parameters in the stored scene definition. Further steps include displaying an abstract image of a scene responsively to the first step of receiving and finally displaying a rendered image of the scene responsively to both of the steps of receiving.
According to still another embodiment, the invention is a method of iteratively and automatically producing a rendering that may be displayed in an authoring user-interface each time a 3-D scene is modified through the authoring user-interface. The steps include storing pass definitions, each of which defines properties of the 3-D scene. The steps further include selecting one of the pass definitions as current responsively to changes in the 3-D scene entered by an author into the authoring user-interface. The steps further include determining properties to be used in the rendering according to the one of the stored pass definitions selected in the step of selecting, such that any property of the one of the pass definitions that corresponds with a property of the 3-D scene supersedes the property of the 3-D scene. Finally, the method calls for rendering the 3-D scene responsively to the superseding properties determined in the step of determining.
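The iterative preview loop of this embodiment can be sketched as a toy authoring session: every scene edit re-filters the scene through the currently selected pass definition and re-renders the preview. The class, its method names, and the assumption that a `"beauty"` pass definition exists are all hypothetical.

```python
class AuthoringSession:
    """Toy preview loop: each scene edit re-renders under the current pass."""

    def __init__(self, scene, pass_definitions, render):
        self.scene = scene
        self.passes = pass_definitions   # {pass name: per-object overrides}
        self.render = render             # stand-in for the renderer
        self.current = "beauty"          # assumed default pass definition
        self._update()

    def select_pass(self, name):
        self.current = name
        self._update()

    def edit(self, obj, **changes):
        self.scene[obj].update(changes)  # author edits the 3-D scene...
        self._update()                   # ...and the preview follows

    def _update(self):
        # Pass-definition properties supersede the scene's own properties.
        overrides = self.passes[self.current]
        filtered = {o: {**props, **overrides.get(o, {})}
                    for o, props in self.scene.items()}
        self.preview = self.render(filtered)
```

Selecting a highlight or matte pass as current changes what the preview shows without touching the stored scene, giving the author the narrowed, pass-specific feedback described above.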
According to still another embodiment, the invention is a method in a computer system for producing pass-images of a 3-D scene, comprising: (1) storing a 3-D scene defining properties that determine an appearance of an image obtained by rendering the 3-D scene; (2) storing pass data sets, each defining override properties corresponding to the 3-D scene; (3) storing an indication of selected ones of the pass data sets according to which the 3-D scene is to be rendered; (4) rendering the 3-D scene, at least once with each pass set as active or current, using at least one of the override properties of each of the pass data sets instead of at least one corresponding one of the properties of the 3-D scene, whereby at least one image or image sequence is produced for each of the pass definitions. A step of storing the at least one image or image sequence for each of the pass data sets may be added. A step of editing at least one of the at least one image for each of the data sets may also be added.
According to still another embodiment, the invention is a method in a computer system for creating and working with pass-images of a 3-D scene. The method includes storing a 3-D scene defining properties that determine an appearance of an image obtained by rendering the 3-D scene. The method further includes storing pass data sets, each defining override properties corresponding to the 3-D scene. The method still further includes selecting one of the pass data sets and rendering, for the selected pass data set, such that at least one of the override properties of each of the pass data sets determines a rendered image resulting therefrom instead of the corresponding property of the 3-D scene. Finally, the method calls for editing the 3-D scene while displaying the rendered image. The method may include rendering the 3-D scene for at least two of the stored pass data sets and compositing pass images resulting therefrom. The method may also include editing the pass images prior to the step of compositing. The method may also include editing the 3-D scene and repeating the step of rendering with identical pass data sets.
According to still another embodiment, the invention is a method of generating a user-interface on a computer for authoring a three-dimensional scene. The method includes storing a 3-D scene in a memory and receiving edits to the 3-D scene, the edits being applied by a user and including modifications to 3-D properties of objects defined by the 3-D scene. The method also includes, substantially simultaneously with, but asynchronously with respect to, the step of receiving, generating a rendered view of the 3-D scene. The rendered view is responsive to a selected set of parameters, the set of parameters being one of a group of sets of parameters relating to the 3-D scene. Finally, the steps of receiving and generating are repeated. The rendered view may be displayed substantially simultaneously with the step of receiving. The sets of parameters may include parameters that replace parameters in the 3-D scene such that the rendered view is determined by parameters of the selected set of parameters rather than by replaced parameters of the 3-D scene.