Many seismic signal processing techniques are applied to seismic data to enhance a migrated image, including regularization to create unrecorded traces needed by many processing algorithms, coherent noise attenuation to remove energy that does not contribute to the image, and random noise attenuation to enhance coherent events both before and after imaging. Many of these signal processing techniques rest on an assumption of stationarity: that the events are planar in nature and that their dip and frequency content do not change with position. In reality, seismic data are non-stationary; they contain events with curvature, and the frequency content changes as the recording time increases. This problem is well known, and several methods to address non-stationary data do exist. These include: breaking up the problem into overlapping spatial-temporal windows that are assumed to be locally stationary, followed by processing and reassembly; the use of non-stationary filters that vary with space and time; and methods such as the curvelet transform that expand the data into a compressible overrepresentation. The present invention is an alternative to these methods, one that allows more flexibility in handling non-stationarity in the data.
Patch-Based Methods
The most common way to apply signal processing algorithms that assume a stationary input to data that are not stationary is (see FIG. 1) to break up the problem into a series of overlapping patches that are assumed to be locally stationary (step 10), process each window independently (step 12), and reassemble the processed patches (step 14). This approach is widely used in slope estimation (Claerbout, Earth Soundings Analysis: Processing versus Inversion, Blackwell, page 91 (1992)), interpolation (Spitz, Geophysics 56, 785-794 (1991)), and signal/noise separation methods such as f-x deconvolution (Canales, SEG Expanded Abstracts 3, 525-527 (1984)), all of which assume that the data are composed of one or more superimposed planar events. The benefit of the patch-based approach is efficiency in two dimensions. One drawback is relatively poor scaling to higher dimensions, as the amount of overlap increases with dimensionality. Another drawback is that an individual patch can produce an unexpected result that either leaves visible patch boundaries in the merged result or, if the patch overlap is considerable, is averaged with the surrounding patches to produce errors that are difficult to track.
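The decompose-process-reassemble workflow of steps 10-14 can be sketched in a few lines of Python. The following is a minimal illustration only (not the invention itself); the function names are hypothetical, and a triangular taper is used to blend overlapping patches so that seams average out:

```python
import numpy as np

def process_in_patches(data, patch, overlap, func):
    """Process a 2-D array in overlapping patches assumed locally stationary.

    Patches are extracted with the given overlap, processed independently
    by `func`, and blended back with triangular tapers so that patch
    boundaries are averaged smoothly rather than abutting hard edges.
    """
    out = np.zeros_like(data, dtype=float)
    wsum = np.zeros_like(data, dtype=float)
    step = patch - overlap
    # Triangular taper; the small offset keeps edge weights nonzero.
    taper1d = np.bartlett(patch) + 1e-6
    taper = np.outer(taper1d, taper1d)
    for i0 in range(0, max(data.shape[0] - overlap, 1), step):
        for j0 in range(0, max(data.shape[1] - overlap, 1), step):
            i1 = min(i0 + patch, data.shape[0])
            j1 = min(j0 + patch, data.shape[1])
            w = taper[: i1 - i0, : j1 - j0]
            out[i0:i1, j0:j1] += w * func(data[i0:i1, j0:j1])
            wsum[i0:i1, j0:j1] += w
    return out / wsum
```

With an identity `func`, the reassembly reproduces the input wherever the patches cover the data, which is a useful sanity check before substituting a real per-patch process such as f-x deconvolution.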
Non-Stationary Filtering
An alternative to the patch-based approach is to solve for filters that vary as a function of position. One example of this is the use of non-stationary prediction-error filters for either interpolation (Crawley et al., SEG Expanded Abstracts, 1999, Vol. 18, Pages 1154-1157) or signal/noise separation (Guitton, Geophysics, 2005, Vol. 70, Pages V97-V107). This spatially variable filter is estimated on the entire dataset simultaneously by solving a large inverse problem for all of the filter coefficients. Because these filters vary with position, the number of unknown filter coefficients can exceed the number of known data points, creating an underdetermined problem that is solved by adding regularization to produce a smoothly non-stationary prediction-error filter. This has the benefit that the filter varies smoothly as a function of position and avoids the visible-boundary problem of the patch-based approach. However, the non-stationary filter estimation is non-unique, so properties of a prediction-error filter that depend on solving an overdetermined problem with a unique solution, such as the guarantee of minimum phase, are lost. The size of the filter also scales poorly with the number of dimensions involved, making higher-dimensional filtering computationally expensive.
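The regularized, underdetermined estimation described above can be illustrated with a toy linear system; the operator `G`, data `d`, and roughening weight `eps` below are hypothetical stand-ins, not the actual prediction-error-filter formulation. More unknowns than data points are stabilized by penalizing roughness of the coefficient vector:

```python
import numpy as np

# Toy underdetermined inversion with smoothness regularization:
# 50 unknown coefficients but only 20 data equations, augmented with a
# first-difference (roughening) penalty so the solution varies smoothly.
rng = np.random.default_rng(0)
n_data, n_coef = 20, 50
G = rng.normal(size=(n_data, n_coef))   # hypothetical forward operator
d = rng.normal(size=n_data)             # hypothetical observed data
eps = 0.1                               # regularization weight
D = np.diff(np.eye(n_coef), axis=0)     # roughening operator (49 x 50)
A = np.vstack([G, eps * D])             # augmented, now overdetermined
b = np.concatenate([d, np.zeros(n_coef - 1)])
a, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The augmented system trades exact data fit for smoothness of `a`; the same trade-off governs smoothly non-stationary filter estimation, only at vastly larger scale.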
Another nonstationary filter often used is a local plane-wave destructor filter (Fomel, Geophysics 67, 1946-1960, (2002)) or a steering filter (Clapp, Geophysics 69, 533-546 (2004)). These filters are stable and can be used to estimate local slope. The method is efficient, but it has difficulty with data containing multiple conflicting slopes and can alternate rapidly between the possible slopes. Spatially aliased data also present problems, as the slope estimation problem then has multiple possible solutions.
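The single-slope least-squares estimate underlying such filters can be sketched as follows. This toy version (all names illustrative) fits the plane-wave equation d_x + p * d_t = 0 over the whole array rather than locally:

```python
import numpy as np

def local_slope(d):
    """Least-squares slope from the plane-wave equation d_x + p * d_t = 0.

    The estimate p = -sum(d_x * d_t) / sum(d_t**2) assumes a single
    dominant slope; with conflicting dips the numerator mixes both
    events and the estimate becomes unstable.
    """
    dt = np.gradient(d, axis=0)  # derivative along the time axis
    dx = np.gradient(d, axis=1)  # derivative along the space axis
    return -np.sum(dx * dt) / np.sum(dt ** 2)
```

On a single planar event the estimate recovers the true dip; superimposing a second, conflicting dip is exactly what breaks this assumption, as noted above.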
A more recent approach to dealing with nonstationarity is from Hale (SEG Expanded Abstracts 26, 3160-3164 (2006)). Hale efficiently computes Gaussian windows in time and space using both the separability of multidimensional Gaussians and recursive filtering; with these he computes local cross-correlations and auto-correlations of the data, from which he generates local prediction-error filters. This method works well, but assumes that the data are well sampled. Hale uses the same efficient Gaussian smoothing to smooth the output of slope estimation from a structure tensor. Current uses of Gaussian filtering in time and space are thus limited to methods that depend on efficient cross- and auto-correlations, and to smoothing the output of other processes. Moving to a domain where prior information is more easily enforced, such as the frequency-wavenumber domain, is not straightforward with this approach.
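The separability that Hale exploits can be demonstrated directly: an N-dimensional Gaussian is a product of one-dimensional Gaussians, so smoothing one axis at a time equals convolution with the full N-dimensional kernel. A minimal numpy sketch (function names are illustrative, and the simple FIR convolution below stands in for Hale's recursive filters):

```python
import numpy as np

def gauss_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at about 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_smooth(data, sigmas):
    """Smooth each axis in turn with a 1-D kernel; by separability this
    equals convolution with the full N-D Gaussian."""
    out = np.asarray(data, dtype=float)
    for axis, sigma in enumerate(sigmas):
        k = gauss_kernel(sigma)
        out = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode='same'), axis, out)
    return out
```

The cost grows linearly with the number of axes rather than with the volume of the N-dimensional kernel, which is why the approach scales to higher dimensions.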
The S transform (Stockwell, Mansinha and Lowe, IEEE Transactions on Signal Processing, 1996, Vol. 44, Pages 998-1001; Pinnegar and Mansinha, Geophysics, 2003, Vol. 68, Pages 381-385; and U.S. Pat. No. 7,617,053 to Pinnegar et al.) uses modulated Gaussian functions for time-frequency analysis, but only along a single dimension, as an alternative to a spectrogram.
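A minimal discrete S transform can be sketched as follows, following the frequency-domain formulation of Stockwell et al.; each voice applies a Gaussian window whose width scales with the frequency index `f` (the implementation details here are illustrative):

```python
import numpy as np

def s_transform(x):
    """Minimal discrete S transform of a 1-D signal.

    For each nonzero frequency f, the spectrum is shifted by f and
    multiplied by a Gaussian whose width is proportional to f, then
    inverse-transformed, yielding a time-frequency map with
    frequency-dependent resolution.
    """
    n = len(x)
    X = np.fft.fft(x)
    S = np.zeros((n // 2 + 1, n), dtype=complex)
    S[0] = np.mean(x)  # the zero-frequency voice is the signal mean
    m = (np.arange(n) + n // 2) % n - n // 2  # signed frequency indices
    for f in range(1, n // 2 + 1):
        g = np.exp(-2.0 * np.pi ** 2 * m ** 2 / f ** 2)  # Gaussian voice
        S[f] = np.fft.ifft(np.roll(X, -f) * g)
    return S
```

A property of this formulation is that summing each voice over time returns the corresponding Fourier coefficient of the input, so no spectral energy is lost in the decomposition.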
Gabor filtering (Daughman, IEEE Trans. on Acoustics, Speech, and Signal Processing, 1988, Vol. 36, Pages 1169-1179), used in image analysis and edge detection, uses modulated Gaussian functions in multiple spatial dimensions. These filters are typically parameterized by a rotation and dilation, making them non-separable. Since applications are limited to 2 or 3 spatial dimensions, separability and parallelization are less of an issue than with higher-dimensional seismic data. In addition, applying this type of filter to data with mixed time and spatial axes could be confusing, as a rotated filter would span different temporal-frequency ranges depending on its rotation.
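A basic Gabor kernel illustrates the rotation parameterization described above (an isotropic Gaussian envelope is used for brevity; practical Gabor filters also dilate the envelope along the rotated axes, which is what destroys separability):

```python
import numpy as np

def gabor_kernel(sigma, theta, wavelength, radius):
    """2-D Gabor kernel: a plane-wave carrier under a Gaussian envelope.

    theta rotates the carrier direction; wavelength sets the spatial
    period of the oscillation along that direction.
    """
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # rotated coordinate
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier
```

Rotating `theta` sweeps the kernel's passband along an arc in the wavenumber plane, which is the behavior that becomes awkward once one axis is time rather than space.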
Curvelets
Another approach to the problem of nonstationarity is to use local basis functions. The curvelet transform (U.S. Patent Application Publication U.S. 20070038691 by Candes et al.) partitions the frequency-wavenumber domain into windows of varying size in scale and angle according to a parabolic-dyadic scaling law; each window is then returned to the time-space domain on a different grid by an inverse Fourier transform, producing an output in angle, scale, time, and space.
Curvelets have largely been used for compressive sensing, where the compressive qualities of curvelets on seismic data are exploited to eliminate noise or interpolate missing data. This works provided that certain underlying assumptions are fulfilled. For the interpolation case, the assumption is that the data are randomly sampled, so that the missing traces produce only a low-amplitude blur in both the frequency-wavenumber and curvelet domains. For the signal/noise case, an adaptive subtraction of externally modeled noise can take place in the curvelet domain. Curvelets thus address the problem of nonstationarity, and work well when simple operations such as thresholding or subtraction suffice in the curvelet domain.
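The thresholding operation referred to above is generic soft shrinkage of transform-domain coefficients. A minimal sketch follows (the curvelet transform itself is not reproduced here; this function operates on any array of coefficients, real or complex):

```python
import numpy as np

def soft_threshold(coeffs, lam):
    """Shrink coefficient magnitudes toward zero by lam, keeping phase.

    Coefficients with magnitude below lam are zeroed; larger ones are
    reduced by lam. Applied to the coefficients of a compressive
    transform, this suppresses the low-amplitude blur from noise or
    random sampling while retaining strong, coherent coefficients.
    """
    mag = np.abs(coeffs)
    scale = np.maximum(1.0 - lam / np.maximum(mag, 1e-30), 0.0)
    return scale * coeffs
```

The denoising or interpolation result is then obtained by inverse-transforming the shrunken coefficients, optionally iterating with a decreasing threshold.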
Applying operators across this curvelet domain is difficult, as the grid spacing and orientation differ for each scale and angle. Additionally, the parameterization of the curvelet space is difficult, as the parameters of scale and number of angles are not intuitive. Finally, curvelets become computationally expensive in higher dimensions, where the gridding and overrepresentation of the data increase greatly.
No single existing method deals with nonstationarity in a way that is stable, easily parallelizable, easily scalable to higher dimensions, and able to incorporate prior knowledge readily. The present invention satisfies these requirements.