1. Field of the Invention
The present invention relates generally to seismic signal processing and, more particularly, to a method for efficient inversion of near singular geophysical signals.
2. Description of the Related Art
A common data processing activity in exploration geophysics is inverting geophysical signals. This need arises in Wiener filtering in its various forms, for example, deconvolution, cross-equalization, adaptive filtering and echo cancellation. Such signals are characterized by a convolutional equation system, and inverting them is in each case a variation on the convolutional model least squares problem. These problems are discussed in Treitel, S. and Lines, L. R., "Linear Inverse Theory and Deconvolution", Geophysics, Vol. 47, pp. 1153-1159, 1982, and in Lines, L. R. and Treitel, S., "Tutorial: A Review of Least-Squares Inversion and its Applications to Geophysical Problems", Geophysical Prospecting, Vol. 32, pp. 159-186, 1984. For geophysical signals, the corresponding systems are often near singular (ill-conditioned) or rank-deficient. Classic methods for inverting such systems require stabilization techniques such as adding white random noise, using a γ damping factor, spectral shaping or constraints, or other so-called regularization methods. For the convolutional model, this is usually done in the normal equation form using Levinson recursion on the Toeplitz autocorrelation matrix, as will be explained below.
In this class of problems, one is given an input signal i(t), an output signal o(t) and a finite impulse response filter u(t) having the relationship

i(t) X u(t) = o(t). (1)
Here the symbol X indicates convolution and the vectors i, u and o are each functions of time t. It is typically desired to find the particular solution filter, or wavelet, u(t) which gives the minimum error e(t) in the least squares sense. The solution to the least squares minimization problem is found by first defining the error vector

e(t) = i(t) X u(t) - o(t), (2)
and then minimizing the squared scalar error ε, defined by the inner product of the error vector e from Eq. (2) with itself:

ε = e^H · e. (3)
Here the superscript ^H indicates the Hermitian transpose operator (complex conjugate transpose), and the dot · indicates the vector inner product.
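As a concrete illustration of Eqs. (2) and (3), the error vector and scalar error can be computed in a few lines. This is an illustrative sketch assuming NumPy; the signal values are arbitrary example data, not taken from the invention:

```python
import numpy as np

i_sig = np.array([1.0, 2.0, 0.5, -1.0, 0.25])   # input signal i(t) (illustrative values)
u = np.array([0.8, -0.3])                       # candidate filter u(t)
o = np.array([0.9, 1.3, 0.1, -1.1])             # desired output o(t)

# Eq. (2): error vector e = i X u - o on the transient-free part of the convolution
e = np.convolve(i_sig, u, mode="valid") - o
# Eq. (3): scalar squared error epsilon = e^H . e (real signals, so a plain dot product)
eps_err = np.vdot(e, e).real
```

The mode="valid" option restricts the convolution to samples free of edge transients, matching the overdetermined system formed next.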
For a filter vector u(t) of length n and an output vector o(t) of length m in Eq. (1), the input vector i(t) will be of length m + n - 1. Convolution of the input vector i with the filter vector u may be represented by multiplication of u by an m × n matrix T, where T is the Toeplitz matrix defined by

T(t, τ) = i(n - τ + t - 1) (4)
for t = 0, . . . , m - 1 and τ = 0, . . . , n - 1. FIG. 1 shows the matrix T. Eq. (4) allows Eq. (1) to be rewritten in matrix form as

T · u = o. (5)
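A minimal sketch of the construction in Eqs. (4) and (5), assuming NumPy, 0-based indices t = 0, . . . , m - 1, and an input i of length m + n - 1; the helper name toeplitz_direct is illustrative:

```python
import numpy as np

def toeplitz_direct(i_sig, n):
    """Eq. (4): T[t, tau] = i(n - tau + t - 1) for t = 0..m-1, tau = 0..n-1 (0-based)."""
    m = len(i_sig) - n + 1                      # number of transient-free output samples
    # Row t holds i(t), ..., i(t + n - 1) in reversed order
    return np.array([i_sig[t:t + n][::-1] for t in range(m)])

rng = np.random.default_rng(0)
u = rng.standard_normal(5)                      # filter of length n = 5
i_sig = rng.standard_normal(12)                 # input of length m + n - 1, so m = 8
T = toeplitz_direct(i_sig, len(u))
# Eq. (5): T . u reproduces the transient-free part of the convolution i X u
tu = T @ u
```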
Eq. (5) is known as the direct form equations of the least squares problem. In geophysical signal processing, the number of data observations, represented by rows m, is usually larger than the number of physical parameters, represented by columns n. Thus the direct form equations, Eq. (5), are typically a system of m overdetermined equations.
The least squares solution u may be found from the normal form equations, an exactly determined n × n Toeplitz system

(T^H · T) · u = T^H · o. (6)
Eq. (6) is Hermitian Toeplitz if i(j) = 0 for j = 0, . . . , n - 2 and j = m, . . . , m + n - 2. This is shown in FIG. 2, which indicates a column circular matrix. FIG. 1 then represents the transient-free case, while FIG. 2 represents the more typical fully-windowed case.
Eq. (6) is typically written simply as

A · u = c, (7)
where A = T^H · T is the n × n autocorrelation or Gram matrix and c = T^H · o is the n × 1 cross-correlation vector. The typical method for solving Eq. (7) has been Levinson recursion, which solves an n × n Toeplitz system in a number of multiply-add steps proportional to n^2, that is, with O(n^2) computational complexity. Computing the matrix A and the vector c adds another (m + 1) · n steps to these calculations.
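Under these definitions, the normal form system of Eq. (7) can be solved with a Levinson-type recursion. The sketch below assumes SciPy, whose solve_toeplitz routine implements such an O(n^2) solver; a fully-windowed input (zero leading and trailing samples, the FIG. 2 case) is used so that A is exactly Toeplitz:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(1)
n, m = 4, 12                                             # filter length n, observation count m
i_sig = np.zeros(m + n - 1)
i_sig[n - 1:m] = rng.standard_normal(m - n + 1)          # fully windowed: i(j) = 0 at both ends
o = rng.standard_normal(m)

T = np.array([i_sig[t:t + n][::-1] for t in range(m)])   # Eq. (4)
A = T.T @ T                                              # autocorrelation (Gram) matrix, Toeplitz here
c = T.T @ o                                              # cross-correlation vector

u_lev = solve_toeplitz(A[:, 0], c)                       # Levinson-type O(n^2) solve from first column
u_ref = np.linalg.solve(A, c)                            # dense O(n^3) reference solve
```

Because A is symmetric Toeplitz, its first column suffices to describe the whole matrix, which is what makes the O(n^2) recursion possible.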
The convolutional model least squares problems encountered in geophysical signal processing are often ill-conditioned. A measure of the condition of a matrix T is the condition number cond(T), defined as the ratio of the largest to the smallest singular value σ(i) in the singular value decomposition of T, or equivalently the square root of the ratio of the largest to the smallest eigenvalue λ(i) in the eigenvalue decomposition of the corresponding Gram matrix A. Thus

cond(T^H · T) = cond(T)^2 = cond(A) = (σ_max / σ_min)^2 = λ_max / λ_min, (8)
where

σ_max = maximum singular value of T,
σ_min = minimum singular value of T,
λ_max = maximum eigenvalue of A, and
λ_min = minimum eigenvalue of A.
Thus the normal form equations given in Eqs. (6) and (7) have a condition number which is the square of the condition number of the direct form equations of Eq. (5).
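This squaring of the condition number is easy to check numerically (NumPy assumed; np.linalg.cond returns the ratio of extreme singular values, and the test matrix here is arbitrary random data):

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((8, 4))     # any full-rank m x n matrix
A = T.T @ T                         # Gram matrix of the normal form equations

cond_T = np.linalg.cond(T)          # sigma_max / sigma_min
cond_A = np.linalg.cond(A)          # lambda_max / lambda_min
# Eq. (8): forming the normal equations squares the condition number
```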
Matrices with larger condition numbers are more difficult to invert accurately. This suggests that it would be advantageous to solve the direct form of the least squares problem in Eq. (5) without converting to the less well-conditioned normal form of Eqs. (6) and (7). This can be done using any of a number of matrix decomposition techniques, all of which have O(m · n^2) computational complexity. The advantage of the normal form equation approach for Toeplitz systems is thus reduced computation, at the expense of numerical stability.
The matrix A is defined as singular if λ_min = 0, and as nearly singular or ill-conditioned if |λ_min| < δ, for an appropriately selected small δ. The eigenvalues of Gram matrices are non-negative. For a practical lower limit, one may choose δ = n · λ_max · eps, where eps is the limit of relative floating point accuracy on the computer used. For IEEE standard 754 double precision floating point arithmetic, eps = 2^-52 ≈ 2.2 × 10^-16. The maximum condition number defining ill-conditioning, c_max, may be estimated by substituting |λ_min| < δ into Eq. (8), yielding c_max = 1/(n · eps). A nearly singular or ill-conditioned system is then defined by cond(A) > c_max.
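A sketch of this ill-conditioning test (NumPy assumed; the helper name is_ill_conditioned is illustrative, and np.finfo supplies the machine eps):

```python
import numpy as np

eps = np.finfo(np.float64).eps                  # 2**-52 for IEEE 754 double precision

def is_ill_conditioned(A):
    """Illustrative check: flag A as nearly singular when cond(A) > c_max = 1/(n * eps)."""
    n = A.shape[0]
    return np.linalg.cond(A) > 1.0 / (n * eps)
```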
For a well-conditioned matrix A, a least squares solution u to Eq. (7) is typically found with a matrix decomposition technique such as QR or LU decomposition. The ill-conditioned case, where cond(A) > c_max, may be solved using singular value decomposition, which also handles the rank-deficient case. Another approach for an ill-conditioned matrix A is to use a small damping factor γ satisfying γ << |λ_max| and δ < γ < |λ_min|. This leads to solving the new system

(A + γ · I) · u = c, (9)
where I is the identity matrix. The singular value decomposition is the most stable approach, but also the slowest.
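The damped system of Eq. (9) and the SVD-based alternative can be sketched as follows (NumPy assumed; the Gram matrix is made ill-conditioned by nearly duplicating a column of T, and γ = 10^-6 · λ_max is an arbitrary illustrative choice, not a value prescribed by the invention):

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.standard_normal((10, 5))
T[:, 4] = T[:, 3] + 1e-10 * rng.standard_normal(10)   # nearly dependent columns
A = T.T @ T                                           # ill-conditioned Gram matrix
c = T.T @ rng.standard_normal(10)

lam_max = np.linalg.eigvalsh(A)[-1]                   # largest eigenvalue of A
gamma = 1e-6 * lam_max                                # small damping factor, gamma << lambda_max
u_damped = np.linalg.solve(A + gamma * np.eye(5), c)  # Eq. (9): damped normal equations

u_svd = np.linalg.pinv(A) @ c                         # SVD route: pseudo-inverse discards tiny singular values
```

Adding γ · I shifts every eigenvalue of A up by γ, which bounds the condition number of the damped system near λ_max / γ; the pseudo-inverse instead truncates the troublesome small singular values outright.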
In geophysical applications, the spectral content of the input signal i(t) is often characterized by regions of low amplitude caused by natural shaping, filtering, noise, or reverberation. Such spectral gaps or notches make spectral inversion very difficult without resorting to stabilization methods such as adding white random noise, using a γ damping factor as in Eq. (9), spectral shaping or constraints, or other so-called regularization methods. The solutions (wavelets) obtained by traditional methods are often ill-behaved and may not give useful results. While methods such as singular value decomposition yield excellent results, they are generally too computationally expensive to be used in most situations.
U.S. patent application with Ser. No. 08/980,743, filed by a co-inventor of the present invention on Dec. 1, 1997, assigned to the assignee of the present invention, and entitled "Approximate Solution of Dense Linear Systems", describes a method of spectral pruning to solve linear systems representing physical systems. However, that application does not take advantage of the corresponding Toeplitz matrix structure, or of solving the corresponding normal form equations, to deal with ill-conditioned systems.