A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
The present invention relates generally to computer graphics, and more specifically, to using hardware devices to generate modified geometry objects based on instructions provided by extension objects.
The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.
Computer generated three dimensional (3D) modeling and animation enrich a wide range of human experiences, from captivating audiences at movie theaters, to gluing gamers to their video games, to taking home buyers on virtual tours of new homes. To generate 3D models and/or animations, a 3D designer interacts with a 3D modeling program, such as 3D Studio Max™, which is commercially available from Autodesk, Inc., to define geometry objects for importing into a computer graphics application, such as a game engine. As used herein, the term “geometry object” refers to an object in a graphics application that is composed of geometrical features that can be manipulated by the graphics application.
As part of creating a geometry object, the designer typically defines a base object, for example a sphere or box, and then applies one or more modifiers to the base object to create a geometry object that can then be exported into a graphic application. As used herein, the term “base object” refers to the first component in a series of components that are used to define and modify a geometry object.
For example, to create an object, a user, such as an animator, may interact with a conventional modeling program to define a base object by selecting a particular object type from a set of predefined object types and selecting a set of parameter values that are to be used to define the specific parameters of the base object. Next, using the modeling program, the user may define one or more modifiers or other types of components that are to be applied to the base object for modifying certain characteristics, properties, attributes, constraints, and other parameters of the base object. Thereafter, once the user is satisfied with the object that is generated based on the selected base object and modifiers, the object can be exported for use in a graphics application. As used herein, a component defines one or more operations in the design of a geometry object. Components may include, but are not limited to, base components that are used as the starting point in a sequence of components, and modifier components that are included in the sequence of components and that modify base components.
As another example, FIG. 1A, FIG. 1B, and FIG. 1C depict a conventional modeling program interface 100 that can be used to generate an object that includes a set of desired characteristics, properties, attributes, constraints, and other parameters. As depicted in window 108 of FIG. 1A, a user may interact with modeling program interface 100 to create a base object 110 by selecting a particular type of object (for example a sphere object) from a creation panel (not shown). Once the object is created, the parameters that are associated with base object 110 can be edited using either the creation panel or through an object parameter menu 104. For example, a sequential ordering of components in the form of a stack may be used to create and modify the geometry object. In the example depicted in FIG. 1A, the components are modifiers that are organized into a modifier stack 105. A modifier stack window 106 provides a visual representation of modifier stack 105 that depicts the base object 110 and any modifiers that have been selected for modifying the base object 110.
Conventionally, the stack provides a sequential hierarchical order for applying the components in the stack to a base component. In some instances, the stack is described as being “evaluated” and each component in the stack is said to be “evaluated,” meaning that the parameters associated with each component are used to define one or more actions to be taken with respect to the base component or a subsequent version of the base component, such as making modifications to the base object.
As used herein, the terms “applying a component” and “evaluating a component” are synonymous. Also, the term “component” includes, but is not limited to, modifiers, which are components that alter the object. For example, components may include a base component that is the starting point for defining a geometry object in a stack, or a display component that provides a representation of the object, such as by presenting a visual representation of the object to a user on a display device.
Once a base component is defined, the user may apply one or more components to modify the characteristics, properties, attributes, constraints, or other parameters of the base component. For example, in FIG. 1B, the user may select a bend modifier button 112 and enter bend parameter data in a bend parameter menu 114 to define a bend modifier for applying to base object 110. Because base object 110 has the form of a sphere, base object 110 may be referred to as a sphere object. In response to the user defining the bend modifier, the bend modifier is inserted into modifier stack 105 in modifier stack window 106. As a result of applying the bend modifier to base object 110, a sphere/bend object 116 is created as depicted in window 108 of FIG. 1B.
After applying the bend modifier, the user may apply additional modifiers to modify the characteristics, properties, attributes, constraints, or other parameters of sphere/bend object 116. For example, in FIG. 1C, the user may select a taper modifier button 118 and enter taper parameter data in a taper parameter menu 120 to define a taper modifier for applying to the sphere/bend object 116 to create a sphere/bend/taper object 122 as depicted in window 108 of FIG. 1C. In response to the user defining the taper modifier, the taper modifier is added to modifier stack 105 in modifier stack window 106 of FIG. 1C.
FIG. 1D depicts a conventional modifier stack 150 (presented to the user as modifier stack 105 in modifier stack window 106 of FIG. 1C) that is used to render sphere/bend/taper object 122 in FIG. 1C. In this example, modifier stack 150 includes sphere object data 152, bend modifier data 154, taper modifier data 156, and node world-space cache (wscache) data 158. Modifier stack 150 maintains a hierarchical order that is used in evaluating the components within the stack. For example, in evaluating modifier stack 150, the lower-ordered bend modifier data 154 is applied or evaluated prior to the higher-ordered taper modifier data 156. Note that if the order of bend modifier data 154 and taper modifier data 156 were switched, the resulting sphere/taper/bend object would likely have at least a somewhat different appearance than sphere/bend/taper object 122.
In the example depicted in FIG. 1D, sphere object data 152 describes the base object selected by the user. Bend modifier data 154 and taper modifier data 156 describe the modifications that are to be respectively applied as the object is passed-up the modifier stack 150. Node wscache data 158 represents the cached result of evaluating modifier stack 150 in world space coordinates instead of object space coordinates.
In evaluating modifier stack 150, a geometry type is selected for rendering the particular object. Assume for the example of FIG. 1D that a geometry type of mesh is selected for rendering the object when sphere object data 152 is defined. To render the object, an initial mesh object is first generated based on the characteristics, properties, attributes, constraints, and other parameters that were defined in sphere object data 152 (for example, base object 110 in FIG. 1A). Next, the mesh object is passed up the modifier stack 150 and bend modifier data 154 is applied to a copy of the initial mesh object to create an updated mesh object (for example, sphere/bend object 116). Next, the updated mesh object is passed up the modifier stack 150 and taper modifier data 156 is applied to a copy of the updated mesh object to further update the mesh object (for example, sphere/bend/taper object 122). Finally, the updated mesh object is passed up the modifier stack 150 to the node wscache data 158 that causes the object (sphere/bend/taper object 122) to be rendered in window 108 as depicted in FIG. 1C.
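The bottom-up evaluation described above can be illustrated with a brief sketch. All class and function names below are invented for illustration and do not correspond to any actual modeling-program API; the mesh is a stub rather than a real tessellation.

```python
import copy

class SphereObject:
    """Base component: generates the initial mesh representation."""
    def __init__(self, radius):
        self.radius = radius

    def make_mesh(self):
        # A real implementation would tessellate a sphere; here, a stub.
        return {"source": "sphere", "radius": self.radius, "modifiers": []}

class Modifier:
    """Modifier component (e.g. bend, taper) applied as the object passes up the stack."""
    def __init__(self, name, **params):
        self.name = name
        self.params = params

    def apply(self, mesh):
        # Each modifier operates on a copy, so lower-ordered results stay intact.
        new_mesh = copy.deepcopy(mesh)
        new_mesh["modifiers"].append((self.name, self.params))
        return new_mesh

def evaluate_stack(base, modifiers):
    """Evaluate bottom-up: base first, then each modifier in hierarchical order."""
    mesh = base.make_mesh()
    for mod in modifiers:      # lower-ordered modifiers are applied first
        mesh = mod.apply(mesh)
    return mesh                # this result would be held by the node wscache

stack = [Modifier("bend", angle=90.0), Modifier("taper", amount=0.5)]
result = evaluate_stack(SphereObject(radius=1.0), stack)

# Non-destructive modeling: redefining the base re-runs the entire stack.
result2 = evaluate_stack(SphereObject(radius=2.0), stack)
```

Because each modifier copies its input, changing any lower-ordered component simply triggers re-evaluation of everything above it, which mirrors the non-destructive behavior of a conventional stack.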
Using a stack for modeling geometry objects is generally referred to as non-destructive modeling, in that each component in the stack is reapplied or reevaluated in its specified order whenever a change is made to an object or a component within the stack. For example, if the user redefines the dimensions of the “lower-ordered” sphere object data 152, the “higher-ordered” bend modifier data 154 and taper modifier data 156 are sequentially reapplied to the newly defined mesh object before the resulting geometry object is displayed to the user by the node wscache data 158.
Additional examples of how modifier stacks may be used to render 3D objects are provided in U.S. Pat. No. 6,061,067, entitled APPLYING MODIFIERS TO OBJECTS BASED ON THE TYPES OF THE OBJECTS; U.S. Pat. No. 6,195,098, entitled SYSTEM AND METHOD FOR INTERACTIVE RENDERING OF THREE DIMENSIONAL OBJECTS; U.S. Pat. No. 5,995,107, entitled CACHING IN A THREE DIMENSIONAL MODELING AND ANIMATION SYSTEM; U.S. Pat. No. 6,034,695, entitled THREE DIMENSIONAL MODELING AND ANIMATION SYSTEM; U.S. Pat. No. 6,184,901, entitled THREE DIMENSIONAL MODELING AND ANIMATION SYSTEM; and U.S. patent application Ser. No. 09/286,133, entitled TRANSLATING OBJECTS BETWEEN SOFTWARE APPLICATIONS WHICH EMPLOY DIFFERENT DATA FORMATS.
A drawback with using a conventional stack to render a geometry object is that certain characteristics, properties, attributes, constraints, and other parameters that were defined at a lower level in the stack no longer influence, or may not even make sense, at a higher level in the stack. For example, sphere object data 152 may include a constraint that no face on the created mesh object is to be smaller than a specified size. Thus, when creating the initial mesh object based on sphere object data 152, the constraint guarantees that the initial mesh object will be created without any faces that are smaller than the specified size.
However, once the initial mesh object is created, the size constraint that is defined by sphere object data 152 is lost and thus is no longer active. When the copy of the initial mesh object is updated based on the bend modifier data 154, the constraint information that was defined by sphere object data 152 no longer influences how the mesh object is modified. Thus, the updated mesh object that is created from applying the bend modifier data 154 may now include one or more faces that are smaller than the specified size. In order to reapply the size constraint, another modifier that applies and enforces the size constraint may be inserted into the stack. However, if many different modifiers are included in the stack, the user may have to repeatedly add such size constraint modifiers, which is inconvenient and adds to the size and complexity of the stack.
In addition, certain properties of a geometry object, such as the number of faces that are contained within a mesh representation of the geometry object, may dynamically change as the geometry object is passed up the stack. For example, attributes may be applied at a lower level to specific faces of the mesh object. If the faces are later removed and/or combined with other faces at a higher level in the modifier stack, the stack may not be able to adequately handle the applied attributes. For example, the base object data may specify that a friction value of “10” is to be associated with face “100” while a friction value of “4” is to be associated with face “101.” However, if in passing the initial mesh object up the stack a subsequent modifier causes faces “100” and “101” to be combined into a single face, the stack may not know what friction value, or even whether any friction value, is to be associated with the single combined face of the updated mesh object.
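The per-face attribute problem described above can be made concrete with a short sketch. The names and the averaging policy below are invented for illustration; the point is that the conventional stack itself supplies no rule for reconciling the two friction values, so any policy chosen by a modifier author is arbitrary.

```python
# Friction values assigned per face at a lower level of the stack.
friction = {100: 10, 101: 4}

def merge_faces(attrs, face_a, face_b, new_face):
    """A higher-ordered modifier merges two faces into one. The stack has no
    rule for the combined face's friction, so this sketch must invent one
    (here, averaging) -- a conventional stack simply cannot know the intent."""
    merged = {face: value for face, value in attrs.items()
              if face not in (face_a, face_b)}
    merged[new_face] = (attrs[face_a] + attrs[face_b]) / 2  # arbitrary policy
    return merged

# Faces 100 and 101 are combined into a new face 200 higher in the stack.
combined = merge_faces(friction, 100, 101, 200)
```

Whether the combined face should carry the average, the maximum, or no friction value at all is exactly the ambiguity the passage above identifies.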
A recent trend in 3D computer graphics is the use of “hardware shaders” that use graphics hardware to perform some graphics manipulations that were previously performed by graphics software. For computationally intensive graphical operations, such as morphs and skin effects, there is a significant increase in performance when such graphical operations are performed by hardware instead of software. Examples of hardware shaders include the ATI Radeon, ATI Radeon 8500, and Leadtek Geforce3 graphics cards. Current hardware shaders are capable of performing only certain types of graphical operations, such as per vertex operations, in which the locations of the vertices of objects are manipulated and processed for lighting calculations, and per pixel operations, in which colors are interpolated and texturing effects are applied for producing the pixels that are shown on a display.
An application program interface (API) allows users to provide instructions to the hardware shaders on how to render graphics on a display. For example, users can use Microsoft's DirectX Graphics API, which includes Direct3D, or SGI's OpenGL API, to provide instructions to the hardware shaders. The API specifies how users are to supply instructions to the hardware shaders, and the types of instructions that are to be supported by the hardware shaders. Essentially, the API allows for the programming of the graphics hardware.
FIG. 10 depicts a flow diagram of the operation of a conventional hardware shader. In block 1010, 3D data is received, such as the data that defines a geometry object that is received from a 3D modeling application. In block 1020, the per vertex operations are performed, such as transformation and lighting effects. In block 1030, the image is rasterized, meaning that the triangles used to represent the geometry object or objects are set up. In block 1040, per pixel operations are performed, which can include applying texturing effects. Finally, in block 1050, the image is displayed, such as by using a FrameBuffer. The APIs for the hardware shaders allow a user not just to tweak or change the parameters used in per vertex and per pixel operations, but to define the underlying equations used for such operations.
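The pipeline of FIG. 10 can be sketched as a chain of user-programmable stage functions. This is a minimal illustration with invented names and stub stages, not any actual shader API; in a real pipeline the `transform`, `light`, and `shade` stages are the equations the user programs through the API.

```python
def per_vertex(vertices, transform, light):
    # Block 1020: transformation and lighting applied to each vertex.
    return [light(transform(v)) for v in vertices]

def rasterize(vertices):
    # Block 1030: set up triangles from consecutive vertex triples (stub).
    return [tuple(vertices[i:i + 3]) for i in range(0, len(vertices) - 2, 3)]

def per_pixel(triangles, shade):
    # Block 1040: interpolate colors / apply texturing per pixel (stub).
    return [shade(tri) for tri in triangles]

def render(vertices, transform, light, shade):
    # Blocks 1010-1050: receive 3D data, run the stages, return the
    # result that would be written to the frame buffer for display.
    return per_pixel(rasterize(per_vertex(vertices, transform, light)), shade)

# Usage with trivial stage definitions: one triangle, identity transform
# and lighting, and a "shader" that emits one value per triangle.
frame = render(
    vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    transform=lambda v: v,
    light=lambda v: v,
    shade=lambda tri: len(tri),
)
```

The key point the passage makes is that the API exposes the stage functions themselves, not merely their parameters, which is what makes the hardware programmable.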
Conventionally, an end user accesses the capabilities of the API and the hardware shader by using another program, sometimes referred to as a “shader tool,” such as the nVidia Effects Browser. Thus, the user can in essence program the hardware shader to apply specified vertex and pixel operations to the 3D output of a graphics modeling application. However, the need to use an additional shader tool is cumbersome and inconvenient for graphics designers and may outweigh the performance improvements that would result from having the graphics hardware perform the specified graphic operations instead of the graphics modeling application.
Based on the foregoing, there is a clear need for an approach for incorporating the use of hardware devices when creating graphical models in graphics modeling applications.
An approach is described for using hardware devices to generate modified geometry objects based on instructions provided by extension objects. According to one aspect of the invention, an extension object is associated with a sequence of components, such as a modifier stack, that is used to make modifications to a geometry object. Based on the sequence of components, an initial representation of the geometry object is generated. Instructions that are based on the extension object are associated with the initial representation. A graphics device, such as a hardware shader, is used to generate a final representation of the geometry object based on the instructions and the initial representation.
According to other aspects, the instructions may include interface code that is to be executed by a set of routines, such as by an application program interface, to provide device instructions to the graphics device for generating the final representation of the geometry object. Also, the instructions may be generated based on the extension object, and the graphics device may execute the device instructions to generate the final representation of the geometry object.
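The approach summarized above can be outlined in a brief sketch. Every name here is invented for illustration: the extension object carries shader instructions, those instructions are associated with the initial representation produced by evaluating the stack, and a stand-in "graphics device" produces the final representation.

```python
class ExtensionObject:
    """Carries instructions (e.g. interface code for a shader API) that a
    graphics device will use to generate the final representation."""
    def __init__(self, instructions):
        self.instructions = instructions

def generate_final_representation(initial_representation, extension, graphics_device):
    # Associate the extension's instructions with the initial representation
    # of the geometry object, then hand both to the graphics device.
    tagged = dict(initial_representation, instructions=extension.instructions)
    return graphics_device(tagged)

# Usage with a stand-in device that records what it received; a real device
# would be a hardware shader executing the supplied instructions.
ext = ExtensionObject(["per-vertex: custom lighting equation"])
final = generate_final_representation(
    {"geometry": "sphere/bend/taper mesh"},
    ext,
    graphics_device=lambda rep: {"rendered": rep["geometry"],
                                 "via": rep["instructions"]},
)
```

The sketch only shows the flow of the claimed approach: stack evaluation yields an initial representation, the extension object supplies instructions, and the device, rather than the modeling software, performs the final modification.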
The invention also encompasses a computer-readable medium, a computer data signal embodied in a carrier wave, and an apparatus configured to carry out the foregoing steps. Other features and aspects will become apparent from the following description and the appended claims.