Electronic integrated circuits are used in most modern electronic products. An integrated circuit consists of a large number of electronic circuits combined into a single package, commonly referred to as a "chip." Chips that contain a very large number of circuits are referred to as very large scale integrated circuits ("VLSI"). Integrated circuits also may be combined to form multi-chip modules. A multi-chip module ("MCM") consists of several chips that are connected together by connection paths embedded in the module.
The design of an integrated circuit involves defining the circuits to be included in the chip, designing the physical components within the chip that correspond to those circuits, providing signal paths to interconnect the components and designing passive circuit elements (such as inductors) that may be needed in the chip.
The characteristics of the passive components and the signal paths can have a significant impact on the operation of the chip. For example, the resistance, inductance and capacitance of these components will affect the signals that pass through them. Consequently, passive components in devices such as high-speed VLSI chips, printed circuit boards and multi-chip modules need to be accurately designed and analyzed to ensure that these devices will operate properly and reliably. Moreover, due to the relatively high cost of fabricating integrated circuits, these components should be rigorously characterized to ensure that the devices operate properly the first time they are fabricated.
Typically, the characteristics of the passive components depend on the attributes of the components. For example, the inductance of a signal path may depend on its width, shape and proximity to other components. Consequently, the process of calculating these characteristics, referred to in the art as parameter extraction, involves applying the appropriate mathematical algorithms to the attributes of the components.
There are two conventional methods of extracting parameters of passive components in integrated circuits. One method involves discretizing the region of interest on the component using either a finite difference scheme or a finite element scheme. Discretizing a region of interest involves subdividing the region into a set of contiguous sampling points. This method produces a system of linear equations which is solved to obtain the corresponding parameter.
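The discretization step described above can be illustrated with a minimal sketch (an assumed example, not from the source): a finite difference scheme subdivides a one-dimensional region of interest into a set of contiguous sampling points and produces a system of linear equations whose solution approximates the quantity of interest.

```python
# Illustrative finite difference discretization (assumed example):
# the region of interest [0, 1] is subdivided into n interior sampling
# points, turning the continuous problem -u''(x) = f(x), u(0) = u(1) = 0
# into a system of linear equations A u = b.
import numpy as np

def assemble_fd_system(n, f):
    """Build the tridiagonal linear system A u = b for -u'' = f."""
    h = 1.0 / (n + 1)                      # spacing between sampling points
    x = np.linspace(h, 1.0 - h, n)         # the contiguous sampling points
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    b = f(x)
    return A, b, x

A, b, x = assemble_fd_system(100, lambda x: np.pi**2 * np.sin(np.pi * x))
u = np.linalg.solve(A, b)   # solving the system yields the sampled solution
# u approximates the exact solution sin(pi * x) at the sampling points.
```

Note that the matrix A here is sparse (tridiagonal); as discussed below, integral equation formulations instead lead to dense matrices.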
The other method uses an integral equation formulation of the problem. Integral equation formulations have several advantages over the finite difference and finite element schemes. For example, integral equation methods can often treat arbitrary regions more effectively. In addition, integral equation methods generally have better conditioning. In other words, the resulting system of equations may be solved more efficiently because the solution converges more rapidly. Furthermore, the dimensionality of the resulting system of equations may be smaller because comparable accuracy can be obtained using fewer sampling points on the component. Conventionally, finite difference and finite element methods define sample points throughout the volume of the component. In contrast, integral equation methods typically discretize only the surface of the component.
Integral equation algorithms using the Method of Moments have been effectively used in the extraction of passive elements in the modeling of integrated circuits and multi-chip module packaging. The Method of Moments technique is discussed in the article "Preconditioned, Adaptive, Multipole-Accelerated Iterative Methods for Three Dimensional First-Kind Integral Equations of Potential Theory", by K. Nabors, et al., SIAM J. Sci. Comput., Vol. 15(3), pp 713-735, May, 1994; and in the article "Rapid Solution of Integral Equations of Scattering Theory in Two Dimensions", by V. Rokhlin, Journal of Computational Physics, 86(2), pp 414-439, February, 1990, both of which are incorporated herein by reference.
Parameter extraction using integral equation methods, such as the Method of Moments, typically involves solving a relatively dense system of linear equations. Conventionally, systems of equations are represented in matrix form. For example, EQUATION 1 illustrates a very simple system of equations that defines a 2-by-2 matrix:

A₁x₁ + B₁x₂ = C₁
A₂x₁ + B₂x₂ = C₂ (EQUATION 1)

The values for A₁, B₁, C₁, A₂, B₂ and C₂ would be known. Solving the matrix involves calculating the values of the variables x₁ and x₂. Various techniques for solving matrices are well known in the linear systems art.
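A minimal sketch of solving such a 2-by-2 system numerically; the coefficient values below are chosen purely for illustration and are not from the source.

```python
# Solving the 2-by-2 system of EQUATION 1 with assumed coefficients:
#   A1*x1 + B1*x2 = C1
#   A2*x1 + B2*x2 = C2
import numpy as np

coeffs = np.array([[3.0, 1.0],    # A1, B1
                   [1.0, 2.0]])   # A2, B2
rhs = np.array([9.0, 8.0])        # C1, C2

x = np.linalg.solve(coeffs, rhs)  # x = [x1, x2]
# -> x1 = 2.0, x2 = 3.0  (check: 3*2 + 1*3 = 9 and 1*2 + 2*3 = 8)
```

For the small dense matrices of EQUATION 1 a direct solver suffices; the difficulty discussed below arises when the same dense structure appears at dimensions on the order of 1000.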
The size of the matrix generated by the integral equation methods depends on the number of sampling points defined by the discretization process. In some applications, matrices having a size on the order of 1000-by-1000 are common. Many conventional factorization methods cannot efficiently solve matrices of this size.
A number of algorithms have been developed for efficiently solving the dense matrices that may be generated by integral equation methods. For example, the particle simulation algorithms and the capacitance and inductance extraction algorithms are well known. These algorithms are treated in the articles: "A Fast Algorithm for Particle Simulations", by L. Greengard and V. Rokhlin, Journal of Computational Physics, 72(2), pp 325-348, December, 1987; "Fasthenry: A Multipole Accelerated 3-D Inductance Extraction Program", by M. Kamon, et al., IEEE Transactions on Microwave Theory and Techniques, Vol. 42(9), pp 1750-58, 1994; "Fast Capacitance Extraction of General Three-Dimensional Structures", by K. Nabors and J. White, IEEE Transactions on Microwave Theory and Techniques, 1992, all of which are incorporated herein by reference.
These algorithms reduce the computation time needed to solve a matrix by exploiting the special structure of the problem. Specifically, they combine interpolation of the function that defines the matrix elements with a divide-and-conquer strategy. The resulting algorithms require O(n) or O(n log n) time, where n is the dimension of the matrix. In other words, the number of operations performed by these algorithms is proportional to n or to "n times log(n)."
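The structure these fast algorithms exploit can be sketched as follows (an illustrative example using an assumed 1/|x - x'| kernel, not any particular algorithm from the articles above): the block of the matrix that couples two well-separated clusters of sampling points is numerically low rank, so it can be represented by a short interpolated expansion rather than stored and multiplied in full.

```python
# Hedged sketch (assumed kernel and geometry): the off-diagonal block of
# a kernel matrix coupling two well-separated clusters of sampling points
# has rapidly decaying singular values, i.e. it is numerically low rank.
import numpy as np

src = np.linspace(0.0, 1.0, 200)     # one cluster of sampling points
trg = np.linspace(3.0, 4.0, 200)     # a well-separated cluster

# Dense 200-by-200 block of the matrix, entries K(x, x') = 1/|x - x'|.
block = 1.0 / np.abs(trg[:, None] - src[None, :])

s = np.linalg.svd(block, compute_uv=False)   # singular values, descending
rank = int(np.sum(s / s[0] > 1e-10))         # numerical rank at ~10 digits
# rank is far smaller than 200, so the block compresses from n*n entries
# to about 2*rank*n entries; applying this recursively over a hierarchy
# of clusters is the essence of the divide-and-conquer strategy.
```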
In general, the above algorithms are custom designed for specific integral equation kernels. For example, in the Greengard and Nabors articles discussed above, the matrix kernel is of the form 1/|x-x'|, which is the free-space Green's function for the Laplace equation. These matrix-implicit schemes generally are not applicable when the matrix kernel of the integral equation is either not available analytically or does not conform to a simple analytic form. Such situations are frequently encountered in Method of Moments solution techniques for electromagnetic simulation.
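For concreteness, a hedged sketch of how the 1/|x-x'| kernel fills a dense Method of Moments matrix in a capacitance-style extraction; the panel geometry, the crude diagonal self-term, and the collocation scheme are all assumptions made for illustration, not the method of any of the cited articles.

```python
# Assumed example: a unit square conductor plate is subdivided into n*n
# panels, and a first-kind integral equation with kernel 1/|x - x'| is
# discretized by collocation at the panel centers.
import numpy as np

n = 20                                  # panels per side of the plate
h = 1.0 / n                             # panel width
xs, ys = np.meshgrid((np.arange(n) + 0.5) * h, (np.arange(n) + 0.5) * h)
centers = np.column_stack([xs.ravel(), ys.ravel()])   # panel centers

d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
np.fill_diagonal(d, 0.5 * h)            # crude self-term regularization
A = h**2 / d                            # dense matrix, entries ~ 1/|x - x'|

v = np.ones(n * n)                      # unit potential on the conductor
q = np.linalg.solve(A, v)               # panel charges
total_charge = q.sum()                  # proportional to the capacitance
```

Note that every entry of A is nonzero: unlike the sparse matrices of finite difference schemes, this matrix is fully dense, which is why the fast algorithms above (or the improved method sought here) are needed at large n.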
The inability of the matrix-implicit fast algorithms to adapt to an arbitrary integral equation kernel has been partially addressed by the matrix-explicit fast wavelet algorithms. Wavelets permit representation of a variety of functions and operators with relatively little redundancy. Through their ability to represent local, high-frequency information with localized basis elements, wavelets allow adaptation in a straightforward and consistent fashion. However, wavelet-based schemes suffer from the disadvantage that the sparse representation of the matrix is extremely sensitive to the choice of basis functions. Moreover, the compression is often unsatisfactory. In some sense wavelets, as they are currently applied, are too general and cannot fully exploit the local low-rank structure of the matrices that result from integral equation kernels associated with physical problems.
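The wavelet approach can be sketched as follows (an assumed illustration using an orthonormal Haar basis and an assumed smoothed kernel; neither is specified by the source): transforming the kernel matrix into a wavelet basis drives many entries toward zero, yielding a sparse representation, but how many entries must be kept depends strongly on the basis and threshold chosen, which is the sensitivity noted above.

```python
# Assumed illustration: compress a kernel matrix in an orthonormal Haar
# wavelet basis and count the entries that remain significant.
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet transform matrix; n must be a power of 2."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])              # scaling (coarse) rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])  # wavelet (detail) rows
    return np.vstack([top, bot]) / np.sqrt(2.0)

n = 64
x = np.linspace(0.0, 1.0, n)
K = 1.0 / (np.abs(x[:, None] - x[None, :]) + 1.0 / n)  # smoothed kernel

W = haar_matrix(n)
C = W @ K @ W.T                    # matrix expressed in the wavelet basis
kept = int(np.sum(np.abs(C) > 1e-4 * np.abs(C).max()))
# kept < n*n: dropping the negligible entries gives a sparse, matrix-
# explicit representation, but the count varies with basis and threshold.
```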
Consequently, a need exists for an improved method for designing components and interconnections by extracting parameters associated with these passive components and connections in integrated circuits and other structures in other physical systems. In particular, a need exists for a parameter extractor that can use arbitrary integral equation kernels and that can exploit the low-rank structure of the matrix.