The invention relates to geophysical surveying, and in particular to seismic surveying, and more particularly still to methods and apparatus for processing a seismic trace from a subterranean region.
Seismic techniques are frequently used for geophysical surveying with a view to characterizing structures in the subterranean strata. For example, such techniques are sometimes used to seek to identify the existence, location and extent of potential hydrocarbon-bearing reservoirs in subterranean rock strata.
In simple terms, a seismic survey involves directing sound waves into the Earth, e.g. using an air gun array, and recording the sound energy that is reflected back to the Earth's surface as a function of time using a geophone (or hydrophone). The timings and strengths of features in the recorded sound energy can be used to calculate the locations and characteristics of subterranean structures/interfaces giving rise to sound reflection within the Earth. Accordingly, one fundamental output from a seismic survey is a so-called seismic trace, which provides a representation of recorded sound energy as a function of time following an air-gun-induced event. There are many well understood techniques for obtaining seismic traces and these are not described here in the interest of brevity.
As is well known, an aspect of seismic surveying which impacts the ability to resolve subterranean structure is the frequency content of the recorded seismic data. The frequency content of seismic data is a function of many things, for example, the frequency content of the originating trigger event (e.g. air gun array), the frequency-dependent attenuation within the Earth (so-called Q-filtering), the frequency response of the recording system, and the frequency characteristics of any pre-processing of the data, such as various noise-reducing filtering methods. It is generally the case that seismic data having a relatively broad frequency bandwidth will allow for improved stratigraphic and lithologic interpretation as compared to seismic data with relatively narrow frequency content.
A simple but widely used description of a seismic trace is that it consists of a reflection series convolved with a seismic wavelet plus data noise (i.e. d=w*r+n, where “d” is the seismic trace, “w” is the seismic wavelet, “r” is a reflection series that represents the reflection from different layers in the subterranean region, “n” is noise and “*” is the convolution operator symbol). Given this description of the trace, various studies (e.g. Kallweit and Wood, 1982 [1]) have related the degree to which nearby reflection events in seismic data can be resolved to the time-duration of the wavelet and its equivalent wavelet frequency content. Such studies show that the resolving power can increase if the wavelet bandwidth and/or its upper cut-off frequency (according to different studies) increases, if the wavelet spectrum is flat from lower to upper cut-off frequencies and if the wavelet is zero phase.
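The convolutional model d=w*r+n described above can be illustrated with a short sketch. This is a minimal illustration only, not part of any described processing method; the reflector positions, the choice of a 25 Hz Ricker wavelet and the noise level are all arbitrary assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse reflection series r: four reflectors in a 1 s trace at 1 ms sampling
r = np.zeros(1000)
r[[120, 340, 355, 700]] = [0.8, -0.5, 0.45, 0.3]

# Zero-phase Ricker wavelet w with a 25 Hz peak frequency (arbitrary choice)
t = np.arange(-64, 65) * 0.001
w = (1 - 2 * (np.pi * 25.0 * t) ** 2) * np.exp(-(np.pi * 25.0 * t) ** 2)

# d = w * r + n: wavelet convolved with reflectivity, plus additive noise
n = 0.01 * rng.standard_normal(r.size)
d = np.convolve(r, w, mode="same") + n
```

Note that the two closely spaced reflectors at samples 340 and 355 overlap after convolution with the wavelet, which is precisely the resolution question studied by Kallweit and Wood, 1982 [1].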
Standard seismic data processing such as deconvolution, spectral whitening or inverse Q-filtering can be applied to recorded data with the objective of increasing the data resolving power. A starting point for these or equivalent techniques is modeling or estimation of the wavelet amplitude spectrum and (possibly) the wavelet phase spectrum. In the case of spectral whitening this is via explicit estimation of the amplitude spectrum; in deconvolution the amplitude is implicit in the autocorrelation input and the phase is estimated via the minimum-phase assumption; in inverse Q-filtering an assumed physical model is used to describe the earth filtering. A general assumption often made in statistical processes is that the amplitude spectrum of the seismic reflectivity is equal at all frequencies—i.e. the reflectivity is assumed to have a “white”, or flat, amplitude spectrum—so that data-derived estimates of the amplitude spectrum are equal to the amplitude spectrum of the seismic wavelet.
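Under the white-reflectivity assumption, a data-derived estimate of the wavelet amplitude spectrum can be sketched as follows. This is a simplified single-trace illustration; the smoothing length is an arbitrary assumption, and practical implementations typically average over many traces and time windows:

```python
import numpy as np

def estimate_wavelet_amplitude(trace, smooth_len=11):
    """Estimate the wavelet amplitude spectrum from a trace, assuming the
    reflectivity spectrum is white, so that the smoothed data amplitude
    spectrum approximates the wavelet amplitude spectrum (up to scale)."""
    amp = np.abs(np.fft.rfft(trace))
    # Moving-average smoothing suppresses reflectivity-related fluctuations
    kernel = np.ones(smooth_len) / smooth_len
    return np.convolve(amp, kernel, mode="same")
```
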
In each case a subsequent step in the process is the design of an inverse operator that aims to flatten the wavelet amplitude spectrum, in addition to possibly treating the wavelet phase spectrum. The general term used to describe this step is “wavelet inverse processing”. After such processing, the degree to which nearby events can be resolved is then characterized by the time duration of the processed version of the wavelet and its frequency content, for example according to the criteria given by Kallweit and Wood, 1982 [1].
If there were zero data noise, it would in principle be possible to flatten the wavelet amplitude spectrum over any desired frequency range and hence fully resolve the seismic reflectivity. In the presence of the inevitable data noise, however, the frequency range over which it is possible to flatten the wavelet amplitude spectrum is restricted by the noise. This is because, at frequencies where the wavelet amplitude is near to the noise level, the relatively high amplification required to flatten the wavelet amplitude spectrum will also amplify the noise.
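The noise limit described above is commonly handled by stabilizing the inverse operator, for example with a "water level" on the gain. The following is a minimal sketch assuming the wavelet amplitude spectrum and an average noise amplitude have already been estimated; the example spectra and noise level are arbitrary:

```python
import numpy as np

def whitening_gain(wavelet_amp, noise_amp):
    """Frequency-domain gain that flattens the wavelet amplitude spectrum
    where the wavelet is above the noise, but caps the amplification at
    1/noise_amp so that the noise is not boosted without limit."""
    return 1.0 / np.maximum(wavelet_amp, noise_amp)

# Example: wavelet amplitude decaying with frequency toward a flat noise floor
freqs = np.linspace(0.0, 125.0, 126)
wavelet_amp = np.exp(-freqs / 40.0)   # arbitrary decaying amplitude spectrum
noise_amp = 0.05                      # arbitrary average noise amplitude
flattened = whitening_gain(wavelet_amp, noise_amp) * wavelet_amp
```

In this example the spectrum is flattened to unity at frequencies where the wavelet is above the noise level, while at the highest frequencies, where the wavelet amplitude approaches the noise floor, the amplification is capped and the spectrum is left to roll off.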
The level and type of noise present in recorded seismic data may vary considerably from dataset to dataset, depending primarily on acquisition conditions; land seismic data generally contains a wider variety and greater levels of noise than marine seismic data. An approximate rule of thumb is that the dynamic range of unprocessed seismic data, defined as the range from the peak of the data amplitude spectrum to the average noise amplitude, is of the order of 40 dB for adequate quality marine seismic data, and is less, typically 30 dB or below, for land seismic data.
The low frequency spectrum of the wavelet is primarily a function of the acquisition parameters—the low frequency response of the seismic source, detector and recording instruments, plus, in marine seismic data, the “ghost” effect related to the depth of the marine seismic source and streamer. Taken together these effects constitute a low-cut filter with a cut-off point that is typically in the region of 5 to 15 Hz and a low frequency roll-off of 20 to 30 dB/octave.
The high frequency spectrum, on the other hand, is primarily determined by the loss of high frequency energy caused by scattering or absorption on propagation, collectively referred to as the “earth Q-filter”. Typical recorded data, before processing, exhibits a linear decay of logarithmic amplitude versus frequency at a particular travel time, with the slope of the decay characterized by the seismic quality factor Q. Average Q values for two-way propagation from the surface to a reflector are usually between 100 and 200, varying primarily with the geographical location of the survey.
In typical quality marine data with a dynamic range of 40 dB and a Q value of 150, the signal amplitude at high frequencies reaches the noise level at approximately 110 Hz for a reflection time of 1 s, and at approximately 37 Hz for a reflection time of 3 s. The high frequency cut-off occurs at a lower frequency at higher time delays in the seismic trace because these times correspond to signals that have propagated further in the Earth, and hence are relatively more significantly attenuated at the higher frequencies. In poorer quality data or for lower Q values these frequency cut-off values may be significantly reduced with an associated reduction in interpretation resolution.
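The inverse scaling of the high-frequency cut-off with reflection time can be sketched using the idealized constant-Q amplitude decay exp(-πft/Q). This is a simplified model: the absolute cut-off values quoted above also depend on the shape of the source spectrum and on exactly how the peak-to-noise dynamic range is defined, so the sketch reproduces the 1/t scaling of the cut-off rather than those exact figures:

```python
import math

def cutoff_frequency(dynamic_range_db, q, reflection_time):
    """Frequency at which idealized constant-Q attenuation exp(-pi*f*t/Q)
    consumes the full dynamic range (in dB) at two-way time t."""
    # Solve 20*log10(exp(-pi*f*t/Q)) = -dynamic_range_db for f
    return dynamic_range_db * q * math.log(10) / (20 * math.pi * reflection_time)

f_1s = cutoff_frequency(40.0, 150, 1.0)
f_3s = cutoff_frequency(40.0, 150, 3.0)  # one third of f_1s
```

Whatever the absolute values, the cut-off at 3 s is one third of the cut-off at 1 s, matching the roughly threefold reduction (approximately 110 Hz to approximately 37 Hz) noted in the text.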
The level of the data noise and the low and high frequency spectra of the wavelet are the primary factors that determine the useable range of frequencies over which wavelet inverse processing is able to flatten the wavelet amplitude spectrum. Since these properties of the recorded data may vary in an unknown manner from survey to survey, and also from trace to trace throughout a survey area, processing tests must typically be used to establish the degree of spectral flattening that is possible before the noise amplification becomes unacceptable.
It is the noise content of the data in particular that is the limiting constraint on the improvement in resolution which is achievable by wavelet inverse processing. Resolution can be further enhanced when using this approach only by processes such as common mid-point (CMP) stacking that seek to improve the relative signal to noise level of the data using well known statistical summing techniques.
There has, however, been a significant amount of research into alternative methods for extending the usable frequency range of data over the last decade or so in areas such as speech and music reproduction, telephony, loudspeaker design and audio data compression. General references for this are, for example, Aarts et al (2003) [12] and Larsen and Aarts (2004) [13]. The techniques that have been developed in those fields do however often rely on specific statistical and/or physical properties of the input and as such do not necessarily have direct applicability to seismic data; speech and music have a harmonic structure, for example, which has been exploited for bandwidth extension on both the low and high end of the input spectrum.
MPEG decompression exploits this harmonic structure to recreate higher frequencies from low frequencies by a wavelet transform (“filterbank”) method (Dietz et al, 2002 [14], Hsu et al 2006 [15]). MPEG decompression, unlike the equivalent seismic problem, is not a “blind” technique, since information extracted during the compression phase can be used during decompression. Related wavelet transform methods have, however, been proposed for seismic bandwidth extension. Smith et al (2008) [16] seek to exploit a purported harmonic structure in seismic data caused by thin-bed interference by a wavelet transform-based generation of high frequencies; Devi and Schwab (2009) [17] similarly exploit the scaling behavior of the wavelet transform to derive higher frequency information from band-limited data. A related approach is described by Puryear and Castagna (2008) [18], who present an algorithm for calculation of bed thickness and reflection coefficients from amplitude spectra, and use this to invert wavelet transform data for layer thickness.
A different approach has been described by Portniaguine and Castagna (2005) [24] and Chopra et al (2006) [25], which the authors call “spectral inversion” or “thin-bed reflectivity inversion”. Although not directly addressing the issue of bandwidth extension of the seismic trace as such, this derives what are termed high resolution estimates of seismic reflectivity. The authors describe their approach as “a form of sparse-spike inversion in that it outputs a sparse reflectivity series”, or “a post-stack spectral inversion method”. The approach appears to be a type of matching pursuits decomposition (“exponential pursuit decomposition”) in which the basis functions for the decomposition are a set of positive and negative thin-layer dipoles, convolved with the seismic wavelet.
Their method is derived from earlier work by Castagna et al (2003) [19] and Portniaguine and Castagna (2004) [20], as originally proposed by Chakraborty and Okaya (1995) [21], wherein a form of high resolution spectral decomposition (“instantaneous spectral analysis”) is performed using a variant of a matching pursuits algorithm; this is described in U.S. Pat. No. 6,985,815 [22] and also dealt with in Castagna and Sun (2006) [23]. Several other authors have also suggested matching pursuits and related algorithms for the derivation of high resolution seismic reflectivity estimates. These include Herrmann (2003) [26], Wang (2007) [27], Broadhead and Tonellot (2010) [28] and Zhang and Castagna (2011) [29].
Although these techniques aim to derive a high resolution version of a reflectivity series they do not directly address the question of increasing the frequency content of the data itself. There therefore remains a need for new techniques for processing seismic survey data with a view to increasing the frequency content/bandwidth of the data in order to enhance the resolution of events on the seismic trace.