Object models are often stored in computer systems in the form of surfaces. Displaying an object represented by such a model generally requires rendering, which usually refers to mapping the object model onto a two-dimensional surface. At least when the surfaces are curved, they are generally subdivided, or decomposed, into triangles in the process of rendering the images.
A cubic parametric curve is defined by the positions and tangents at the curve's end points. A Bezier curve, for example, is defined by a geometry matrix of four control points (P1-P4): the curve's two end points and two points that determine the tangent vectors at those end points. Changing the locations of the points changes the shape of the curve.
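The evaluation of such a curve can be sketched in Python using the standard Bernstein-polynomial form of a cubic Bezier curve (the function name and point representation are illustrative assumptions, not part of any particular system):

```python
def bezier_point(p1, p2, p3, p4, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1].

    p1..p4 are the four control points of the geometry matrix,
    each an (x, y, z) tuple.
    """
    # Bernstein basis weights for a cubic curve.
    b1 = (1 - t) ** 3
    b2 = 3 * t * (1 - t) ** 2
    b3 = 3 * t ** 2 * (1 - t)
    b4 = t ** 3
    return tuple(b1 * a + b2 * b + b3 * c + b4 * d
                 for a, b, c, d in zip(p1, p2, p3, p4))
```

At t=0 the curve passes through p1 and at t=1 through p4, which matches the role of the end points in the geometry matrix.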
Cubic curves may be generalized to bicubic surfaces by defining cubic equations of two parameters, s and t. In other words, bicubic surfaces are defined as parametric surfaces where the (x,y,z) coordinates in a space called “world coordinates” (WC) of each point of the surface are functions of s and t. Varying both parameters from 0 to 1 defines all points on a surface patch. If one parameter is assigned a constant value and the other parameter varies from 0 to 1, the result is a cubic curve on the surface. The surface patch itself is defined by a geometry matrix P comprising 16 control points (FIG. 4).
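Under the common tensor-product Bezier formulation (an assumption here; other bicubic bases exist), a point on the patch can be evaluated from the 16 control points of the geometry matrix as follows:

```python
from math import comb

def bernstein(i, t):
    """Cubic Bernstein basis polynomial B_i(t), for i in 0..3."""
    return comb(3, i) * t ** i * (1 - t) ** (3 - i)

def bicubic_point(P, s, t):
    """Evaluate a bicubic Bezier patch at (s, t) in [0,1] x [0,1].

    P is the 4x4 geometry matrix of 16 control points,
    each an (x, y, z) tuple.
    """
    point = [0.0, 0.0, 0.0]
    for i in range(4):
        for j in range(4):
            w = bernstein(i, s) * bernstein(j, t)  # tensor-product weight
            for k in range(3):
                point[k] += w * P[i][j][k]
    return tuple(point)
```

Fixing s and letting t vary (or vice versa) in this function traces exactly the cubic curves described above.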
While the parameters s and t each range over a closed one-dimensional interval (typically the interval [0,1]), the points (x,y,z) describe the surface: x=f(s,t), y=g(s,t), z=h(s,t), s∈[0,1], t∈[0,1], where ∈ denotes that the parameter lies in the given interval.
The space determined by s and t, the bidimensional interval [0,1]×[0,1], is called “parameter coordinates” (PC). Textures are described in a space called “texture coordinates” (TC), which can be two or even three dimensional; texture points therefore have two coordinates (u,v) or three coordinates (u,v,q). The process of attaching a texture to a surface is called “texture-object association” and consists of associating u, v (and q) with the parameters s and t via some functions: u=a(s,t), v=b(s,t) (and q=c(s,t)).
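A minimal sketch of such an association, assuming simple scaling functions for a(s,t) and b(s,t) chosen purely for illustration:

```python
def texture_coords(s, t, scale_u=2.0, scale_v=2.0):
    """Map surface parameters (s, t) to texture coordinates (u, v).

    Here the association functions a and b are plain scalings
    (the texture repeats scale_u times across the patch); in general
    any functions of (s, t) may be used.
    """
    u = scale_u * s  # u = a(s, t)
    v = scale_v * t  # v = b(s, t)
    return u, v
```

A three-dimensional texture would add a third function q = c(s, t) in the same way.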
Textures can be used both to apply color to objects and to make the surfaces of objects appear rough. In the latter case, when the textures perturb the points on the surface they are called “displacement maps”, and when the textures perturb the orientation of the normals to the surface they are called “bump maps”. We will show how the present invention applies to both displacement and bump maps.
FIGS. 1A and 1B are diagrams illustrating a process for rendering bicubic surfaces. As shown in FIG. 1A, the principle used for rendering such a curved surface 10 is to subdivide it into smaller four-sided surfaces or tiles 12 by subdividing the intervals that define the parameters s and t. The subdivision continues until the surfaces resulting from subdivision have a curvature, measured in WC space, that is below a predetermined threshold. The subdivision of the intervals defining s and t produces sets of numbers {si}, i=1..n and {tj}, j=1..m that determine a subdivision of the PC. This subdivision induces a subdivision of the TC: for each pair (si,tj) we obtain a pair (ui,j,vi,j) (or a triplet (ui,j,vi,j,qi,j)), where ui,j=a(si,tj), vi,j=b(si,tj), qi,j=c(si,tj). For each pair (si,tj) we also obtain a point (called a “vertex”) in WC, Vi,j=(x(si,tj),y(si,tj),z(si,tj)).
The contents of a texture map at location (ui,j,vi,j) are color and transparency. The contents of a bump map at a location (mi,j=m(si,tj), ni,j=n(si,tj)) are the components of a three dimensional vector dNi,j used for perturbing the normal Ni,j at the point Vi,j=(x(si,tj),y(si,tj),z(si,tj)): N′i,j=Ni,j+dNi,j.
The contents of a displacement map at a location (ki,j=k(si,tj), li,j=l(si,tj)) are the components of a three dimensional displacement (dxi,j, dyi,j, dzi,j) used for perturbing the coordinates of the point Vi,j=(x(si,tj),y(si,tj),z(si,tj)): V′i,j=Vi,j+(dxi,j, dyi,j, dzi,j)*Ni,j.
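The two perturbations above can be sketched as follows, reading the * in the displacement formula as a component-wise product with the normal (an interpretive assumption; the function names are illustrative):

```python
def perturb_normal(N, dN):
    """Bump mapping: add the stored perturbation vector dN to the
    surface normal N, giving N' = N + dN."""
    return tuple(n + d for n, d in zip(N, dN))

def displace_vertex(V, d, N):
    """Displacement mapping: move the vertex V by the stored
    displacement d = (dx, dy, dz) modulated component-wise by the
    surface normal N, giving V' = V + d * N."""
    return tuple(v + di * ni for v, di, ni in zip(V, d, N))
```

In the bump-map case only the lighting changes (via the normal); in the displacement-map case the geometry itself moves.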
This process is executed off-line because the subdivision of the surfaces and the measurement of the resulting curvature are very time consuming. As shown in FIG. 1B, when all resulting four-sided surfaces (tiles) 12 are below a certain curvature threshold, each such resultant four-sided surface 12 is divided into two triangles 14 (because triangles are easily rendered by dedicated hardware), and a normal is calculated for each triangle and for each triangle vertex. The normals are used later for lighting calculations.
As shown in FIG. 2, bicubic surfaces 10A and 10B that share boundaries must share the same subdivision along the common boundary (i.e., the tile 12 boundaries match). This is due to the fact that the triangles resulting from subdivision must share the same vertices along the common surface boundary, otherwise cracks will appear between them.
The conventional process for subdividing a set of bicubic surfaces, in pseudocode, is as follows:

Step 1.
    For each bicubic surface:
        Subdivide the s interval
        Subdivide the t interval
        until each resultant four-sided surface is below a predetermined curvature threshold
Step 2.
    For all bicubic surfaces sharing a same parameter (either s or t) boundary:
        Choose as the common subdivision the union of the individual subdivisions,
            in order to prevent cracks showing along the common boundary
Step 3.
    For each bicubic surface:
        For each pair (si,tj):
            Calculate (ui,j, vi,j, qi,j, Vi,j)
        Generate triangles by connecting neighboring vertices
Step 4.
    For each vertex Vi,j:
        Calculate the normal Ni,j at that vertex
    For each triangle:
        Calculate the normal to the triangle
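Step 3 and the triangle generation can be sketched as follows (a minimal illustration; the curvature-driven choice of {si} and {tj} and the normal calculations of Step 4 are omitted):

```python
def tessellate(si, tj):
    """Given subdivisions {si} and {tj} of the parameter intervals,
    return the grid of parameter pairs and the triangles (as index
    triples into that grid) obtained by splitting each four-sided
    tile into two triangles."""
    n, m = len(si), len(tj)
    # One parameter pair (and hence one vertex) per grid point.
    pairs = [(s, t) for s in si for t in tj]
    triangles = []
    for i in range(n - 1):
        for j in range(m - 1):
            a = i * m + j            # indices of the tile's four corners
            b = i * m + (j + 1)
            c = (i + 1) * m + j
            d = (i + 1) * m + (j + 1)
            triangles.append((a, b, c))  # each tile -> two triangles
            triangles.append((b, d, c))
    return pairs, triangles
```

Because neighboring tiles reuse the same grid indices along their shared edges, the triangles share vertices there, which is exactly the property Step 2 enforces across surface boundaries.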
Steps 1 through 4 are executed on general purpose computers and may take up to several hours to execute. The steps of rendering the set of bicubic surfaces that have been decomposed into triangles are as follows:

Step 5.
    Transform the vertices Vi,j and the normals Ni,j
    Transform the normals to the triangles
Step 6.
    For each vertex Vi,j:
        Calculate lighting
Step 7.
    For each triangle:
        Clip against the viewing viewport
        Calculate lighting for the vertices produced by clipping
Step 8.
    Project all the vertices Vi,j into screen coordinates (SC)
Step 9.
    Render all the triangles produced after clipping and projection
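Step 8 can be illustrated with a minimal perspective projection; the projection-plane distance d and the viewport mapping below are assumptions for the sketch, and real 3D controllers work in homogeneous coordinates with clipping:

```python
def project_to_screen(V, d=1.0, width=640, height=480):
    """Perspective-project a view-space point V = (x, y, z), z > 0,
    into screen coordinates (SC)."""
    x, y, z = V
    # Perspective divide: points farther away map closer to the center.
    xp = d * x / z
    yp = d * y / z
    # Map the projection plane ([-1,1] in each axis, an assumed
    # viewport convention) to pixel coordinates, with y flipped.
    sx = (xp + 1.0) * 0.5 * width
    sy = (1.0 - (yp + 1.0) * 0.5) * height
    return sx, sy
```

The division by z is what distorts the surfaces in SC, which is why curvature measured in WC can differ substantially from curvature as seen on screen.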
Steps 5 through 9 are typically executed in real time with the assistance of specialized hardware found in 3D graphics controllers.
The conventional process for rendering bicubic surfaces has several disadvantages. For example, the process is slow because the subdivision is computationally intensive, and it is therefore often executed off-line. In addition, because the subdivision of the tiles into triangles is done off-line, the partition is fixed; it cannot account for the fact that more triangles are needed when the surface is closer to the viewer and fewer triangles are needed when the surface is farther away. The process of adaptively subdividing a surface as a function of distance is called “automatic level of detail”.
Furthermore, each vertex or triangle plane normal needs to be transformed when the surface is transformed in response to a change of view of the surface, a computationally intensive process that may need dedicated hardware. Also, there is no accounting for the fact that the surfaces are actually rendered in a space called “screen coordinates” (SC) after a process called “projection”, which distorts such surfaces to the point where we need to take into consideration the curvature in SC, not in WC.
Because the steps required for surface subdivision are so slow and limited, a method is needed for rendering a curved surface that minimizes the number of required computations, such that the images can potentially be rendered in real-time (as opposed to off-line). The present invention addresses such a need.