Physical fields can often be simulated over a user-defined domain—e.g., a surface or volume specified geometrically—by discretizing the domain (i.e., dividing it into discrete elements), and modeling the field-governing equations and applicable boundary conditions with a system matrix equation, i.e., a (typically large) linear system of equations that describe the behavior of the field within the discretized domain. For example, electromagnetic fields can be simulated using a discretized formulation of Maxwell's equations, and temperature fields can be simulated using a discretized heat equation. To set up the matrix equation, the finite element method (FEM) is widely used because of its ability to model complex heterogeneous and anisotropic materials and to represent geometrically complicated domains using, for example, tetrahedral elements. FEM is a numerical technique for finding approximate solutions of partial differential equations or integral equations (such as, e.g., Maxwell's equations), i.e., a technique that enables problems lacking exact mathematical (“analytical”) solutions to be approximated computationally (“numerically”).
In brief, FEM typically involves representing a surface or spatial volume as many small component elements. This discretization may be accomplished by defining a mesh grid (such as, e.g., a triangular, tetrahedral, or other polygonal mesh) over the domain. The physical fields (e.g., the components of electric and magnetic fields) may then be expressed in a form suited to the discretized domain. For example, fields may be represented in a finite-dimensional function space of piecewise polynomial functions (e.g., piecewise linear functions) that can be described as linear combinations of basis functions, or "finite elements," whose "support" (defined as the portion of the domain where the basis functions are non-zero) includes only a small number of adjacent mesh elements. The boundary value problem that describes the behavior of the fields in the domain (i.e., the field-governing equations and boundary conditions) is typically rephrased in its weak, or variational, form before discretization.
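By way of a purely illustrative example (not drawn from the foregoing text), the following Python sketch assembles and solves the FEM system matrix equation for the one-dimensional Poisson problem -u''(x) = 1 on [0, 1] with zero boundary values, using piecewise-linear "hat" basis functions on a uniform mesh; the function name and problem choice are hypothetical.

```python
import numpy as np

def assemble_1d_poisson(n):
    """Assemble the FEM system for -u''(x) = 1 on [0,1], u(0) = u(1) = 0,
    using piecewise-linear 'hat' basis functions on a uniform mesh of n cells.
    Each hat function's support spans only the two cells adjacent to its node,
    so the resulting system matrix is sparse (here, tridiagonal)."""
    h = 1.0 / n
    m = n - 1                        # number of interior nodes (unknowns)
    A = np.zeros((m, m))
    b = np.full(m, h)                # exact load integral of f = 1 per hat
    for i in range(m):
        A[i, i] = 2.0 / h            # overlap of a hat with itself
        if i > 0:
            A[i, i - 1] = -1.0 / h   # overlap with the left neighbor's hat
        if i + 1 < m:
            A[i, i + 1] = -1.0 / h   # overlap with the right neighbor's hat
    return A, b

n = 8
A, b = assemble_1d_poisson(n)
u = np.linalg.solve(A, b)            # nodal values of the FEM solution
x = np.linspace(0, 1, n + 1)[1:-1]   # interior node coordinates
exact = x * (1 - x) / 2              # analytical solution, for comparison
```

For this particular one-dimensional problem the nodal FEM values coincide with the analytical solution, a special property of linear elements in one dimension that does not carry over to general domains.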
FEM results in a system matrix equation, which may then be solved with a direct or iterative solver, depending on the size and characteristics of the linear system. (A "solver," as the term is used herein, denotes a method for solving a system of equations, or a computer program implementing such a method, as determined by context.) For large three-dimensional problems, direct solvers potentially require prohibitive amounts of memory and suffer from poor parallel scalability. Therefore, iterative solvers typically present the only practical means for solving large systems. In iterative methods, the problem is approached in successive steps, with each step refining a previous approximation to more closely approach the exact solution.
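As an illustrative sketch of such successive refinement (again hypothetical, with NumPy and an arbitrarily chosen small system), the classical Jacobi iteration updates an approximation until the change between steps falls below a tolerance:

```python
import numpy as np

def jacobi_solve(A, b, tol=1e-10, max_iter=500):
    """Solve A x = b iteratively: each step refines the previous
    approximation via x_new = D^{-1} (b - R x), where D holds the
    diagonal of A and R = A - D holds the off-diagonal entries."""
    D = np.diag(A)
    R = A - np.diag(D)
    x = np.zeros_like(b)
    for k in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:   # converged: steps stop changing
            return x_new, k + 1
        x = x_new
    return x, max_iter

# A small diagonally dominant system, for which Jacobi is guaranteed to converge.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters = jacobi_solve(A, b)
```

Each iteration costs only a matrix-vector product, which is why such methods remain tractable for systems far too large for direct factorization.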
A powerful technique to facilitate parallel solution of large problems is the domain decomposition method (DDM). In this method, the original domain of the problem is decomposed into several (typically non-overlapping, and possibly repetitive) subdomains; for example, a cuboid spatial domain may be divided into a series of smaller adjacent cubes. The resulting system matrix then typically has block form, where each diagonal block (i.e., submatrix) corresponds to one of the subdomains, and off-diagonal blocks represent coupling between the subdomains. The continuity of fields at the interfaces between adjacent subdomains is enforced through suitable boundary conditions (also referred to as transmission conditions), which are preferably chosen so as to avoid mathematical complication (e.g., so that modeling of each subdomain involves a "well-posed" problem having an unambiguous solution, and such that convergence occurs rapidly enough to be computationally tractable).
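The block structure described above can be made concrete with a small illustrative example (hypothetical, using NumPy): a one-dimensional Laplacian matrix is partitioned into two subdomain blocks, with the sparse off-diagonal blocks carrying only the interface coupling.

```python
import numpy as np

# System matrix for 6 unknowns of a small 1D Laplacian (tridiagonal).
n = 6
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Decompose into two subdomains of 3 unknowns each. The diagonal blocks
# A11 and A22 describe the subdomains themselves; the off-diagonal blocks
# A12 and A21 represent coupling between the subdomains.
A11, A12 = A[:3, :3], A[:3, 3:]
A21, A22 = A[3:, :3], A[3:, 3:]

# The block partition reproduces the original system matrix exactly.
reassembled = np.block([[A11, A12],
                        [A21, A22]])
```

Note that each coupling block contains a single non-zero entry, reflecting that only the unknowns at the shared interface couple the two subdomains; this sparsity is what makes per-subdomain (and hence parallel) processing attractive.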
In order to increase the amenability of the matrix equation to computational solution, the matrix may be "preconditioned." Typically, preconditioning involves applying a "preconditioner" (a matrix that reduces the "condition number" of the problem) to the system matrix. The condition number is a metric of the propagation of approximation errors during numerical solution and, consequently, of the accuracy of the approximated solution. Smaller condition numbers are associated with higher accuracy, and therefore typically with a higher rate of convergence toward the solution. Thus, application of a preconditioner tends to reduce the number of necessary iterations. In domain decomposition methods, frequently used preconditioners include Jacobi preconditioners and Gauss-Seidel preconditioners. Jacobi preconditioners facilitate parallelization because, during the iterative solution, each block can be updated independently. Gauss-Seidel preconditioners are not as easy to parallelize, but are nonetheless attractive because they can converge in fewer iterations than Jacobi preconditioners, particularly if the subdomains are numbered in a way that mimics the propagation of fields through the problem domain. (The subdomains are preferably numbered starting with the subdomain containing the excitation. This subdomain is surrounded by a collection of neighboring subdomains, which are numbered next. For each neighbor, its neighbors are then numbered, and this process continues until all subdomains are numbered.) Jacobi preconditioners disregard the ordering of subdomains (i.e., the subdomain numbering based upon the relative geometric arrangements of the subdomains). Thus, the convergence rate of Jacobi preconditioners is independent of the subdomain ordering and therefore cannot be improved by optimal ordering.
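The subdomain-numbering procedure described parenthetically above amounts to a breadth-first traversal of the subdomain adjacency graph, starting from the subdomain containing the excitation. A minimal illustrative sketch (hypothetical names; standard-library Python only):

```python
from collections import deque

def order_subdomains(adjacency, source):
    """Number subdomains breadth-first, starting from the subdomain
    containing the excitation (`source`) and proceeding outward through
    successive rings of neighbors, mimicking field propagation.
    `adjacency` maps each subdomain id to its geometric neighbors."""
    order, seen = [], {source}
    queue = deque([source])
    while queue:
        s = queue.popleft()
        order.append(s)                # assign the next number to s
        for nb in adjacency[s]:
            if nb not in seen:         # each subdomain is numbered once
                seen.add(nb)
                queue.append(nb)
    return order

# A 2x3 grid of subdomains, with the excitation in subdomain 0:
#   0 - 1 - 2
#   |   |   |
#   3 - 4 - 5
adjacency = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5],
             3: [0, 4], 4: [1, 3, 5], 5: [2, 4]}
ordering = order_subdomains(adjacency, 0)
```

Here the excitation subdomain is numbered first, its immediate neighbors next, and so on outward, which is the ordering a Gauss-Seidel preconditioner can exploit.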
Gauss-Seidel preconditioners generally respect the ordering of domains, and are more effective because they include all coupling terms between subdomains, resulting in a smaller condition number and, hence, faster convergence. Due to the coupling terms, however, they are also inherently sequential, resulting in degraded parallelizability. Accordingly, an alternative preconditioner suitable for domain-decomposition formulations is desirable.
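The trade-off between the two preconditioners can be illustrated with a small hypothetical experiment (NumPy; the function name and the stationary-iteration setup are choices made for illustration, not taken from the text): a block-Jacobi preconditioner keeps only the diagonal subdomain blocks, while a block-Gauss-Seidel preconditioner additionally keeps the lower coupling blocks, and the latter converges in fewer iterations at the price of sequential block updates.

```python
import numpy as np

def precond_richardson(A, b, M, tol=1e-8, max_iter=2000):
    """Stationary iteration x <- x + M^{-1} (b - A x), where M is the
    preconditioner; fewer iterations indicate a more effective M."""
    x = np.zeros_like(b)
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            return x, k
        x = x + np.linalg.solve(M, r)
    return x, max_iter

# A small 1D Laplacian partitioned into 3 subdomain blocks of size 4.
n, bs = 12, 4
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# Block-Jacobi preconditioner: diagonal blocks only. Coupling between
# subdomains is ignored, so all block updates could run in parallel.
M_jac = np.zeros_like(A)
for s in range(0, n, bs):
    M_jac[s:s + bs, s:s + bs] = A[s:s + bs, s:s + bs]

# Block-Gauss-Seidel preconditioner: diagonal blocks plus the lower
# coupling blocks, so each block update depends on earlier blocks
# (inherently sequential, but it captures inter-subdomain coupling).
M_gs = np.zeros_like(A)
for s in range(0, n, bs):
    M_gs[s:s + bs, :s + bs] = A[s:s + bs, :s + bs]

x_j, it_j = precond_richardson(A, b, M_jac)
x_g, it_g = precond_richardson(A, b, M_gs)
```

In this example the Gauss-Seidel variant reaches the same tolerance in fewer iterations than the Jacobi variant, consistent with the discussion above; its cost is that the block solves cannot proceed independently.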