This invention relates to seismic exploration and, more particularly, to a method for restoring missing or null seismic traces utilizing a single, non-iterative procedure equivalent to a generalized Papoulis-Gerchberg iteration for a transform which band-limits acquired seismic data. This invention further relates to a method for extrapolating recorded seismic data, again utilizing a single, non-iterative procedure equivalent to a generalized Papoulis-Gerchberg iteration for a transform which band-limits acquired seismic data.
In land-based seismic exploration, it is common practice to deploy a large array of geophones along a line of exploration on the surface of the earth and to record the vibrations of the earth at each geophone location to obtain a collection of seismic traces or seismograms. The traces are recorded as digital samples representing the amplitude of a received seismic signal as a function of time. In its most general sense, marine seismic exploration is similar to land exploration except that the seismic source and seismic receivers are towed behind an exploration vehicle which traverses a line of exploration along the ocean surface. In both cases, since seismograms are typically obtained along a line of exploration, the recorded digital samples can be formed into x-t arrays with each sample in the array representing the amplitude of the seismic signal as a function of horizontal distance and time. When such arrays are visually reproduced, by plotting or the like, seismic sections are produced. A seismic section depicts the subsurface layering of a section of the earth and is the principal tool which the geophysicist studies to determine the nature of the earth's subsurface. Before an array of seismic samples or traces can be converted into a seismic section for interpretation by geophysicists, the array must be extensively processed to remove noise and to make reflection events discernible.
It will be appreciated by those skilled in the art of seismic exploration that practical problems sometimes result in seismic traces with no recorded data or seismic traces that contain severe local contamination. Where the contamination is severe, removal of the contaminated data from the seismic section is often mandated. Therefore, while the two practical problems differ in both origin and impact on the seismic data, both may lead to seismic sections which contain either gaps in the traces or entire traces which have been excluded from the section. Indeed, in many such cases one or more gaps in individual traces and/or one or more null traces among otherwise normal seismic traces can result.
For this reason, it is a common problem in seismic exploration to be confronted with the presence of either seismic traces containing no recorded data or seismic traces that clearly contain severe noise contamination. Severe noise contamination--including coherent noise--can result from numerous sources including random bursts of noise, multiple or intrabed reflections or ground roll. Under severe conditions, the coherent noise is capable of dominating the seismic data. For example, direct arrivals in acquired marine seismic data and refraction energy in acquired land seismic data can dominate parts of the prestacked seismic section.
On the other hand, a missed shot by the seismic source, the failure of a geophone, or a dead channel between the geophone and the recorder can result in a seismic section which includes traces without data. Surprisingly, the impact of a missing trace on an otherwise normal seismic section can be quite severe. A missing seismic trace acts like a gap in a coherent event and can be viewed as a spatial negative spike with sufficient amplitude to cancel out coherent events. A multichannel processing algorithm used to remove large coherent noise events from the seismic record would react to the gap as it would to a spike added to the data, thereby producing additional processing noise in the form of an extraneous impulse response.
Standard practice among geophysicists faced with seismic traces containing either no recorded data or severely contaminated recorded data has been to exclude such traces, commonly referred to as "null" traces, from the otherwise satisfactory data set. The remaining seismic data would then be processed without the excluded data. The drawback to such a procedure is that, because missing traces act like negative spikes, a substantial amount of processing noise is added to the surrounding traces and proper interpretation of the seismic section is obscured.
On occasion, the null trace is necessary for proper processing of the seismic data. Under these circumstances, it was common practice to attempt to restore the null trace by creating a new trace characterized by seismic events consistent with nearby coherent events. Such an approach focused upon combining x-t domain traces near the missing trace to create the missing trace.
Various applications of f-k spectrum analysis and filtering are well known in the art. See U.S. Pat. No. 4,218,765 issued to Kinkade, U.S. Pat. No. 4,380,059 issued to Ruehle and U.S. Pat. No. 4,594,693 issued to Pann et al. Such techniques have been applied to seismic sections in an attempt to solve the problems described above. F-k spectrum analysis and filtering is particularly useful when seismic data is contaminated by large amplitude, coherent noise which obscures geologically significant signals because the coherent noise is often concentrated in a different part of the f-k spectrum than the signal. In such cases, f-k filtering can be used to attenuate the coherent noise, thereby revealing the seismic signals for interpretation.
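The dip discrimination underlying such f-k filtering can be illustrated with a minimal sketch (not taken from any of the cited patents; the array shape, sample spacings, and velocity cutoff below are illustrative assumptions): the x-t section is transformed to the f-k domain, the fan of slopes corresponding to apparent velocities below a cutoff is muted, and the result is transformed back.

```python
import numpy as np

def fk_filter(section, dt, dx, vmin):
    """Attenuate coherent noise whose apparent velocity is below vmin.

    section : 2-D array of samples, shape (n_t, n_x), time by distance
    dt, dx  : time and trace spacings
    vmin    : apparent-velocity cutoff; energy with |f/k| < vmin is muted
    """
    n_t, n_x = section.shape
    spec = np.fft.fft2(section)                 # to the f-k domain
    f = np.fft.fftfreq(n_t, d=dt)[:, None]      # temporal frequencies (Hz)
    k = np.fft.fftfreq(n_x, d=dx)[None, :]      # spatial wavenumbers (cycles/m)
    # Fan (dip) mask: pass |f| >= vmin * |k|, i.e. apparent velocity >= vmin.
    mask = np.abs(f) >= vmin * np.abs(k)
    return np.real(np.fft.ifft2(spec * mask))   # back to the x-t domain
```

In practice the abrupt boolean mask shown here would be tapered near the fan boundary to reduce ringing; the sketch keeps the hard cutoff only for clarity.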
Under some circumstances, however, f-k filtering is not recommended because it may generate processing noise capable of interfering with prestack interpretation and other well known processing techniques. For example, the f-k filtering of seismic data to remove coherent noise does not produce satisfactory results when null traces interrupt large amplitude coherent events which are to be filtered out using an f-k filter discriminating primarily on dip. In these situations, significant processing noise appears in the form of an edge effect, i.e., a noisy trace or traces adjacent to the gap resulting from the abrupt interruption of the coherent event. In the past, common practice was to either ignore or mute out the produced processing noise.
By ignoring this type of noise, other prestack multichannel data processing algorithms may create still more noise. If it is large enough, the processing noise will appear in the stack, disrupting primary reflections and thereby adversely affecting interpretation. By muting out the processing noise which contaminates both the missing trace and adjoining traces, a larger gap which interrupts coherent events is formed. It follows, therefore, that other prestack multichannel algorithms operating on the data with an enlarged gap will create additional processing noise requiring the muting out of still more traces. By muting traces, however, the effective fold of the data is reduced, thereby reducing the effectiveness of stacking.
Iterative procedures for extrapolating band-limited functions have been discussed in the signal processing literature for several years. Athanasios Papoulis, "A New Algorithm in Spectral Analysis and Band-Limited Extrapolation", IEEE Transactions on Circuits and Systems, Vol. CAS-22, No. 9, September, 1975, pgs. 735-742, discloses an algorithm for computing the transform of a band-limited function by application of an iterative process involving the discrete Fourier series and the fast Fourier transform. Papoulis proposes an iterative extrapolation technique for determining the Fourier transform F(w) of a band-limited function f(t) in terms of a finite segment g(t) of f(t). See also R. W. Gerchberg, "Super-resolution through Error Energy Reduction", Optica Acta, Vol. 21, No. 9, 1974, pgs. 709-720 and Athanasios Papoulis, Signal Analysis, Ch. 7, 1977, pgs. 221-261.
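The Papoulis-Gerchberg iteration described in these references can be sketched in one dimension (the signal, band mask, gap location, and iteration count below are illustrative assumptions, not taken from Papoulis): the estimate is alternately band-limited in the Fourier domain and forced to agree with the observed samples in the time domain, reducing the error energy at each pass.

```python
import numpy as np

def papoulis_gerchberg(g, known, band, n_iter=300):
    """One-dimensional Papoulis-Gerchberg extrapolation (a sketch).

    g      : signal with unknown samples set to zero
    known  : boolean mask, True where samples of g are observed
    band   : boolean mask over FFT bins, True inside the assumed band limit
    """
    x = g.astype(float).copy()
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[~band] = 0.0                  # enforce the band limit
        x = np.real(np.fft.ifft(X))
        x[known] = g[known]             # re-impose the observed samples
    return x
```

Each pass projects onto the set of band-limited signals and then onto the set of signals consistent with the data, so the restored values in the gap converge toward the band-limited signal that agrees with the recorded samples.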
A similar iterative procedure for data restoration was disclosed by J. F. Claerbout, "Restoration of Missing Data by Least Squares Optimization", Stanford Exploration Project, Report No. 25, October, 1980, pgs. 1-16. In Claerbout, the data space comprised two parts--the raw data r and values x to be placed in gaps. The data space (x,r) is mapped into a column vector with the known data r in the bottom part of the vector and the unknown part x in the top. Once in model space, a weighted quadratic form is produced by premultiplying the transform by its transpose, placing in the middle a diagonal matrix of weights W. The derivative of the quadratic form with respect to x is set to zero and dx is determined by solution of a set of simultaneous equations for dx. The vector x is updated to x+dx and the procedure repeats in an iterative method.
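The core of such a least-squares gap fill can be sketched as a single solve (a simplification of Claerbout's procedure, which iterates and updates the weights W; here the weights are taken as the identity and the transform is assumed to be a short second-difference roughening filter, both illustrative choices): the gap values x are chosen so that the filtered trace has minimum energy.

```python
import numpy as np

def fill_gap_least_squares(d, known, filt):
    """Fill gaps so the filtered trace has minimum energy (a sketch).

    d     : data vector with zeros in the gaps
    known : boolean mask, True where d is observed
    filt  : short roughening filter, e.g. [1, -2, 1] (second difference)
    """
    n = len(d)
    # Full-convolution matrix F, shape (n + len(filt) - 1, n).
    F = np.zeros((n + len(filt) - 1, n))
    for j in range(n):
        F[j:j + len(filt), j] = filt
    A = F[:, ~known]                    # columns acting on the unknowns x
    B = F[:, known]                     # columns acting on the known data r
    # Minimize ||A x + B r||^2  =>  normal equations  A^T A x = -A^T B r.
    x, *_ = np.linalg.lstsq(A, -B @ d[known], rcond=None)
    out = d.astype(float).copy()
    out[~known] = x
    return out
```

With the second-difference filter, minimizing the output energy fills the gap with values that make the restored trace as smooth as the surrounding data allows.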
R. W. Schafer et al., "Constrained Iterative Restoration Algorithms", Proceedings of the IEEE, Vol. 69, No. 4, April 1981, pgs. 432-450, describes iterative techniques for removing the effects of distortion on a signal. In removing the effects of distortion, Schafer predistorts the signal and later removes the predistortion to achieve the desired signal.