Many computational methods operate over a region of n-dimensional space. For instance, a solid object may be modeled by a digital representation, and finite element analysis is one of many computational applications to which such a model may be put.
It has been found that such computational methods are facilitated if the digital representation of the object is made in terms of a set or collection of simple geometric elements. Depending on the number, size, and shape of the elements, and on their configuration relative to each other, an arbitrarily shaped object may be modeled to a desired degree of accuracy.
It has also been found preferable that the elements making up the representation of the object be taken from a set of known elements having predefined simple topologies. For modeling three-dimensional objects, the simplest element is the tetrahedron. Other simple three-dimensional elements include the pentahedron and the hexahedron.
The process of preparing a digital model of an object for use in computations includes decomposing the object into a collection of elements from the set of known elements. It is preferable that the collection of elements fit the object as naturally as possible. The elements fit the object naturally if the boundaries between elements of the collection tend to follow edges of the object, and the sizes of the elements tend to be greater along straight, or otherwise continuous, portions of the object.
It is desirable that the collection of elements fit the object naturally because the dependence of the computational solution on a given subdivision is reduced. The finite element method is a computational method commonly used with such element representations of objects. A finite element computation is essentially an interpolation of a function between endpoints made up of the vertices or edges of one of the elements. There are two types of finite element computations, called h-type and p-type.
In h-type finite element analysis, the function is interpolated at a fixed polynomial order. As a consequence, there is an inherent limitation as to how closely the interpolation can follow the changes in value of the function between two given endpoints. Thus, the accuracy of the interpolated result is inversely related to the distance between the endpoints. Therefore, in an object having sections of varying intricacy, where the function is likely to vary more in value in the more intricate sections, a natural subdivision is preferable because smaller elements, and hence more accurate interpolations, are provided where the function is most likely to vary.
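The relationship between element size and interpolation accuracy described above can be illustrated with a minimal sketch. The function, interval, and fixed (linear) polynomial order below are illustrative assumptions, not part of any particular finite element formulation; the sketch merely shows that, at fixed order, halving the element size shrinks the worst-case interpolation error.

```python
import math

def linear_interp_error(f, a, b, n_elements, samples=50):
    """Worst-case error of piecewise-linear (fixed-order) interpolation
    of f on [a, b] using n_elements equal elements (h-type refinement)."""
    h = (b - a) / n_elements
    worst = 0.0
    for e in range(n_elements):
        x0, x1 = a + e * h, a + (e + 1) * h
        f0, f1 = f(x0), f(x1)
        # Sample within the element and compare f against its linear interpolant.
        for s in range(samples + 1):
            x = x0 + (x1 - x0) * s / samples
            interp = f0 + (f1 - f0) * (x - x0) / (x1 - x0)
            worst = max(worst, abs(f(x) - interp))
    return worst

# Refining the mesh (more, smaller elements) at fixed polynomial order
# reduces the error -- the h-type trade-off described in the text.
coarse = linear_interp_error(math.sin, 0.0, math.pi, 4)
fine = linear_interp_error(math.sin, 0.0, math.pi, 16)
```

With the element size quartered, the worst-case error drops by roughly a factor of sixteen, consistent with the quadratic dependence of linear-interpolation error on element size.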
In p-type finite element analysis, the accuracy of the interpolated result, for a given element, is related to the polynomial order of the function. Therefore, in p-type finite element analysis, the number of elements of the collection need not be as great as in h-type. Again, a natural subdivision is preferable because a natural subdivision is likely to keep to a minimum the total number of elements, which is desirable to reduce the number of p-type computations.
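The p-type trade-off can be sketched in the same hypothetical setting: a single element is kept fixed and the polynomial order of the interpolant is raised instead. The equally spaced nodes and Lagrange form below are illustrative choices, not the node placement or basis of any particular p-type implementation.

```python
import math

def single_element_error(f, a, b, order, samples=200):
    """Worst-case error of one polynomial interpolant of degree `order`
    through equally spaced nodes on a single fixed element [a, b]
    (p-type enrichment)."""
    nodes = [a + (b - a) * i / order for i in range(order + 1)]
    vals = [f(x) for x in nodes]

    def interp(x):
        # Lagrange form: sum of nodal values times cardinal basis polynomials.
        total = 0.0
        for i, xi in enumerate(nodes):
            term = vals[i]
            for j, xj in enumerate(nodes):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total

    return max(abs(f(a + (b - a) * s / samples)
                   - interp(a + (b - a) * s / samples))
               for s in range(samples + 1))

# On the same single element, raising the polynomial order reduces the
# error -- so fewer elements are needed than in the h-type approach.
low_order = single_element_error(math.sin, 0.0, math.pi, 2)
high_order = single_element_error(math.sin, 0.0, math.pi, 5)
```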
Therefore, the problem of how to attain a natural subdivision, preferably minimizing the total number of elements, represents an important challenge to those who seek to produce digital representations of objects. One conventional method for producing a natural subdivision is taught in U.S. Pat. No. 4,797,842, issued to Nackman et al. and titled "Method of Generating Finite Elements Using the Symmetric Axis Transform." The Nackman method involves a two-step process in which, first, an object is divided into coarse subdomains, and then the subdomains are further subdivided to produce the final collection of elements. The first step of coarse subdivision is done by using a symmetric axis transform which generates axes symmetric to opposing boundaries of the object, and a radius function of the axes giving the distance between the axes and the boundaries. A coarse subdomain is created between each single non-branching section of the axes and one of the associated boundaries of the object. Then, the coarse subdomains are further subdivided into the elements.
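The radius function at the heart of the symmetric axis transform can be illustrated for the simplest possible case, an axis-aligned rectangle, whose central non-branching axis segment lies midway between the long edges. This is only a hand-worked sketch of the radius-function concept, not the Nackman patent's algorithm; the rectangle dimensions and helper name are illustrative assumptions.

```python
def rectangle_radius(px, py, w, h):
    """Distance from an interior point (px, py) to the nearest boundary
    edge of an axis-aligned w-by-h rectangle.  For points on the
    symmetric axis, this is the value of the radius function: the radius
    of the maximal inscribed disc centred at that point."""
    return min(px, w - px, py, h - py)

# For a 4 x 2 rectangle, the central non-branching axis segment is the
# line y = 1 for 1 <= x <= 3; every point on it is equidistant from the
# top and bottom edges, so the radius function is constant (1.0) there.
# A coarse subdomain would lie between this segment and one such edge.
axis_radii = [rectangle_radius(x / 2, 1.0, 4.0, 2.0) for x in range(2, 7)]
```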
This method is straightforward in two dimensions, because the coarse subdomains are usually in the form of quadrilaterals. In three dimensions, a two-dimensional subdivision is performed on a surface of the three-dimensional coarse subdomains by either mapping points of the surface (which is likely to be warped) onto a flat surface or defining geodesics on the warped surface, and then performing the two-dimensional symmetric axis transform using the mapped flat surface or the geodesics to define the symmetric axis. Then, the three-dimensional coarse subdomain is further subdivided, through the use of a further subdivision of the surface in a manner analogous to the further subdivision performed for a two-dimensional object.
However, with arbitrarily shaped three-dimensional solids, the Nackman method produces undesirably large numbers of elements which do not necessarily follow the natural contours of the object in the simplest possible manner. Therefore, the Nackman method is not fully satisfactory.
Other conventional schemes decompose an object by imposing an artificial signature onto the subdivision. That is, they impose a predetermined decomposition pattern on the object.
For instance, in Kela et al., "A Hierarchical Structure for Automatic Meshing and Adaptive FEM Analysis", Production Automation Project, College of Engineering and Applied Science, Univ. of Rochester, published in Engineering Computations, Nov. 1986, there is disclosed a decomposition method including "boxing" the object to define a convenient minimal spatial region, and decomposing the box into quadrants. Each one of the quadrants, which are subdomains of the object, is tested to determine whether it is either wholly inside or wholly outside the object. A subdomain for which neither of these conditions is true is recursively subdivided until one of the conditions is met. A data structure is defined, using a predetermined numbering convention based on the recursive subdivisions, so that a given subdomain is uniquely identified by its number. Thus, a given subdomain may readily be accessed in a data structure through the use of an array index based on its number, rather than by a conventional pointer-following method through a tree structure.
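The boxing-and-quadrant scheme described above can be sketched as follows. The circular test object, the exact inside/outside tests, the recursion-depth cap, and the base-4 path code are all illustrative assumptions made for this sketch; they stand in for, and are not, the Kela paper's actual object representation, tests, or numbering convention.

```python
def classify_box_vs_disk(x0, y0, x1, y1, cx, cy, r):
    """Return 'in' if the box lies wholly inside the disk, 'out' if
    wholly outside, else 'mixed'.  Exact for a disk: a box is inside a
    convex disk iff all four corners are, and outside iff the box point
    nearest the centre is outside."""
    corners = [(x0, y0), (x0, y1), (x1, y0), (x1, y1)]
    if all((px - cx) ** 2 + (py - cy) ** 2 <= r * r for px, py in corners):
        return 'in'
    nx = min(max(cx, x0), x1)  # clamp centre into the box to find the
    ny = min(max(cy, y0), y1)  # nearest box point to the disk centre
    if (nx - cx) ** 2 + (ny - cy) ** 2 > r * r:
        return 'out'
    return 'mixed'

def decompose(x0, y0, x1, y1, cx, cy, r, depth, code=1, out=None):
    """Recursively split 'mixed' quadrants (to a depth cap, an
    assumption of this sketch) and label each leaf with an integer code
    built from its quadrant path, so leaves can be addressed by array
    index rather than by following tree pointers."""
    if out is None:
        out = {}
    label = classify_box_vs_disk(x0, y0, x1, y1, cx, cy, r)
    if label != 'mixed' or depth == 0:
        out[code] = label
        return out
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    quadrants = [(x0, y0, mx, my), (mx, y0, x1, my),
                 (x0, my, mx, y1), (mx, my, x1, y1)]
    for q, (qx0, qy0, qx1, qy1) in enumerate(quadrants):
        # Appending the quadrant index in base 4 gives every subdomain
        # a unique number encoding its position in the hierarchy.
        decompose(qx0, qy0, qx1, qy1, cx, cy, r, depth - 1, code * 4 + q, out)
    return out

# Decompose the bounding box of a disk of radius 1.5 centred at (2, 2).
labels = decompose(0.0, 0.0, 4.0, 4.0, 2.0, 2.0, 1.5, depth=3)
```

The resulting leaves include quadrants wholly inside and wholly outside the object; as the text notes, the quadrant grid is imposed by the scheme rather than derived from the object's own shape.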
While the Kela method has the advantage of speed, because of the recursive quadrant-subdividing scheme, the method imposes the quadrant pattern on the object as the artificial signature. Thus, the resultant collection of elements does not follow the natural shape of the object. Accordingly, this and similar methods also fail to achieve the desired characteristics of a decomposition scheme.