Technical Field
Embodiments of the subject matter disclosed herein generally relate to the field of geophysical data acquisition and processing. In particular, the embodiments disclosed herein relate to processing data collected during a geophysical survey such as a seismic survey.
Discussion of the Background
Geophysical data is useful for a variety of applications such as weather and climate forecasting, environmental monitoring, agriculture, mining, and seismology. As the economic benefits of such data have been proven, and additional applications for geophysical data have been discovered and developed, the demand for localized, high-resolution, and cost-effective geophysical data has greatly increased. This trend is expected to continue.
For example, seismic data acquisition and processing may be used to generate a profile (image) of the geophysical structure under the ground (either on land or seabed). While this profile does not provide an exact location for oil and gas reservoirs, it suggests, to those trained in the field, the presence or absence of such reservoirs. Thus, providing a high-resolution image of the subsurface of the earth is important, for example, to those who need to determine where oil and gas reservoirs are located.
A land seismic survey system 10 capable of providing a high-resolution image of the subsurface of the earth is traditionally configured as illustrated in FIG. 1 (although many other configurations are used). System 10 includes plural receivers 12 and acquisition units 12a positioned over an area 13 of a subsurface to be explored, and in contact with the surface 14 of the ground. A number of vibroseismic or other types of sources 16 are also placed on surface 14 in an area 17, in a vicinity of area 13 of receivers 12. A recording device 18 is connected to a plurality of receivers 12 and placed, for example, in a station-truck 20. Each source 16 may be composed of a variable number of vibrators or explosive devices, and may include a local controller 22. A central controller 24 may be present to coordinate the shooting times of the sources 16. A GPS system 26 may be used to time-correlate sources 16 and receivers 12 and/or acquisition units 12a.
With this configuration, sources 16 are controlled to generate seismic waves, and the receivers 12 record the waves reflected by the subsurface. The receivers 12 and acquisition units 12a may be connected to each other and to the recording device 18 with cables 30. Alternatively, the receivers 12 and acquisition units 12a can be paired as autonomous nodes that do not need the cables 30.
The purpose of seismic imaging is to generate high-resolution images of the subsurface from acoustic reflection measurements made by the receivers 12. Conventionally, as shown in FIG. 1, the plurality of seismic sources and receivers is distributed on the ground surface at a distance from each other. The sources 16 are activated to produce seismic waves that travel through the subsurface. These seismic waves undergo deviations as they propagate. They are refracted, reflected, and diffracted at the geological interfaces of the subsurface. Certain waves that have traveled through the subsurface are detected by the seismic receivers 12 and are recorded as a function of time in the form of signals (called traces).
Referring to FIG. 2, while continuing to refer to FIG. 1, the seismic sources 16 may be moved while the receivers 12 are rotated (i.e., “rolled”) from a trailing edge of the survey area 13 to a leading edge. Referring now to FIG. 3, while continuing to refer to FIGS. 1 and 2, moving the sources and receivers in the described manner provides a high-density grid of source points 40 and recording points 50 over a large area with a limited number of sources 16 and receivers 12. Each source may be fired at a distinct time in order to enable each active receiver 12 to collect a unique trace for each source point 40 that is activated while the receiver resides at a recording point 50. In some scenarios, billions of traces are collected, and each trace corresponds to a midpoint 60 between a source point 40 and a recording point 50. Although the intent is typically to place the sources 16 and receivers 12 at specific locations that provide a set of traces for midpoint locations 60 corresponding to a uniform grid, physical restrictions and other issues may result in non-uniform placement of the source points 40 and the recording points 50, such as the locations shown in FIG. 3.
Recorded signals are typically processed by a migration operation to obtain an image of underground geological structures. The migration operation essentially reverses the propagation of the recorded waves so that their energy converges at the subsurface boundaries that generated them. Typically, a stacking operation is then performed on the migrated data by summing energy derived from approximately the same point in the subsurface. The stacking operation increases the signal-to-noise ratio and the amplitude ratio between a primary reflection and subsequent reflections referred to as “multiples.” The collection of traces summed in this process is commonly referred to as a “common image gather.” For example, assuming the subsurface is horizontally stratified with no lateral variation of acoustic velocities, the traces from different pairs of source points and receiver points that illuminate the same point in the subsurface at variable source-receiver distances (offsets) are those with a common midpoint 60 between the source and receiver (see FIG. 2). Ideally, such traces can be summed into a common mid-point stack trace that provides a stronger signal-to-noise ratio than an individual trace.
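The signal-to-noise benefit of stacking can be illustrated with a minimal numpy sketch. All data and parameters below are synthetic and chosen purely for illustration: aligned signal adds coherently across traces, while independent random noise averages down by roughly the square root of the number of traces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic gather: 25 moveout-corrected traces sharing one midpoint,
# each the same wavelet plus independent random noise.
n_traces, n_samples = 25, 200
t = np.arange(n_samples)
wavelet = np.exp(-0.5 * ((t - 100) / 5.0) ** 2)
gather = wavelet + rng.normal(scale=0.5, size=(n_traces, n_samples))

# Common mid-point stack: average the aligned traces.
stack = gather.mean(axis=0)

# Residual noise drops roughly as 1/sqrt(n_traces).
noise_before = (gather[0] - wavelet).std()
noise_after = (stack - wavelet).std()
print(round(noise_before / noise_after, 1))  # close to sqrt(25) = 5
```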
The waves reflected in the subsurface are recorded at arrival times that vary as a function of the offset and azimuth between the source and the receiver. For example, FIG. 4 is a 2D plot of a COCA (common offset, common azimuth) cube that shows how the common midpoint arrival times (vertical axis) for a dipping event display a sinusoidal timing response 70 as a function of offset (horizontal axis) and azimuth. The same phenomenon may be observed above subsurface layers exhibiting azimuthal anisotropy, where the velocity of sound varies with azimuth. Therefore, before traces can be added, they should be corrected for these timing variations, bringing them to a common reference known as the zero-offset trace. This correction may be made by a so-called moveout correction step. In the case of azimuthal anisotropy, this could be termed azimuthal moveout correction.
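The sinusoidal timing response of a dipping event can be sketched with a toy linearized model (not taken from the cited literature; the parameter values and the simple cosine form are assumptions for illustration): at a fixed offset, the arrival-time perturbation varies as the cosine of the angle between the source-receiver azimuth and the dip direction.

```python
import numpy as np

# Toy model: arrival time at fixed offset varies sinusoidally with the
# source-receiver azimuth relative to the reflector dip direction.
t0, offset = 1.0, 1000.0              # zero-offset time (s), offset (m)
slowness_dip = 2e-4                   # apparent dip slowness (s/m), assumed
dip_azimuth = np.radians(40)          # dip direction, assumed

azimuths = np.radians(np.arange(0, 360, 10))
t = t0 + slowness_dip * offset * np.cos(azimuths - dip_azimuth)

# The latest arrival occurs when shooting along the dip direction.
print(round(float(np.degrees(azimuths[np.argmax(t)]))))  # 40
```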
In general, it is considered that the time at which the same event is recorded varies as a function of the offset along a hyperbolic normal moveout (NMO) curve that depends on the average wave propagation velocity in the subsurface. For each time at zero offset, an NMO curve is determined by successive approximations of the velocity and an evaluation of the semblance of traces along the corresponding curve. The determination of NMO curves provides a means of correcting the traces (via a computed time shift) so as to align reflections on all traces, allowing them to be “stacked” (i.e., summed).
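A minimal sketch of NMO correction follows, assuming a single constant velocity and the standard hyperbolic traveltime t(x) = sqrt(t0² + (x/V)²); the function name and synthetic spike are illustrative only.

```python
import numpy as np

def nmo_correct(trace, offset, velocity, dt):
    """Flatten a reflection by reading the recorded trace at the
    hyperbolic time t(x) = sqrt(t0**2 + (offset/velocity)**2) for
    each zero-offset time t0."""
    t0 = np.arange(len(trace)) * dt
    t_hyp = np.sqrt(t0**2 + (offset / velocity) ** 2)
    # Linearly interpolate the recorded samples at the hyperbolic times.
    return np.interp(t_hyp, t0, trace, left=0.0, right=0.0)

# Synthetic spike recorded at the hyperbolic arrival time for a 2 km offset.
dt, v, x, t0_event = 0.004, 2000.0, 2000.0, 0.8
trace = np.zeros(500)
t_arrival = np.sqrt(t0_event**2 + (x / v) ** 2)  # about 1.28 s
trace[int(round(t_arrival / dt))] = 1.0

corrected = nmo_correct(trace, x, v, dt)
peak_time = int(np.argmax(corrected)) * dt
print(round(peak_time, 3))  # peak moved back near t0 = 0.8 s
```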
However, most of the time, the NMO correction is not sufficiently precise and distortions remain. An additional correction is often made during a so-called residual moveout (RMO) step. In general, it may be assumed that the residual correction is of the parabolic type. On this subject, reference is made to the publication “Robust estimation of dense 3D stacking velocities from automated picking,” Franck Adler, Simon Brandwood, 69th Ann. Internat. Mtg., SEG 1999, Expanded Abstracts. The authors suggest an RMO correction defined by the equation:

τ(x,t) = x²(V⁻² − V_ref⁻²)/(2t)  (1)

where τ is the RMO correction; x is the offset; t is the time at zero offset; V_ref is a reference velocity function; and V is an updated velocity.
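Equation (1) can be evaluated directly; the sketch below (function name and parameter values are illustrative) also shows the parabolic character of the correction: doubling the offset quadruples τ.

```python
def rmo_correction(x, t, v, v_ref):
    """Parabolic residual moveout per equation (1):
    tau(x, t) = x**2 * (V**-2 - Vref**-2) / (2 * t)."""
    return x**2 * (v**-2 - v_ref**-2) / (2.0 * t)

# If the updated velocity equals the reference, no residual shift remains.
print(rmo_correction(x=1500.0, t=1.2, v=2000.0, v_ref=2000.0))  # 0.0

# With an updated velocity slower than the reference, the shift grows
# quadratically with offset.
tau_near = rmo_correction(x=750.0, t=1.2, v=1900.0, v_ref=2000.0)
tau_far = rmo_correction(x=1500.0, t=1.2, v=1900.0, v_ref=2000.0)
print(round(tau_far / tau_near, 1))  # 4.0
```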
In order to provide additional data and image resolution, the trend in seismic surveys is to pair seismic sources and receivers at increasing distances from each other and to record traces at increasing offsets. As a result of the increased offsets, RMO curves become increasingly difficult to describe, and the parabolic model has often been found unsatisfactory. Furthermore, typical prior art models are unable to describe RMO distortions as a function of azimuth.
In addition to issues with RMO correction, many land and ocean bottom datasets suffer from high levels of noise, which makes the task of processing and interpretation difficult. This is more pronounced for low fold datasets. Modern single sensor high fold datasets can also exhibit high noise levels due to poor coupling and ground or mud roll. For such datasets, it can be pragmatic to reduce the noise level.
De-noising algorithms are generally split into two categories: those that are designed to remove random noise and those that remove coherent noise. The removal of random noise normally relies on the fact that, while the signal is predictable, the incoherent noise is not. This principle is the basis for fx prediction filtering (Canales, L. L., “Random noise reduction,” 54th SEG Annual International Meeting, Expanded Abstracts, 3, no. 1, 525-529, 1984), fx projection filtering (Soubaras, R., “Signal-preserving random noise attenuation by the F-X projection,” 64th SEG Annual International Meeting, Expanded Abstracts, 13, no. 1, 1576-1579, 1994), and other coherency driven techniques (for example, Gulunay et al., “Coherency enhancement on 3D seismic data by dip detection and dip selection,” 77th SEG Annual International Meeting, Expanded Abstracts, 2007).
Other de-noising algorithms attempt to mitigate the coherent noise by locating characteristics that distinguish it from the primary energy. For example, Radon demultiple makes the distinction that on normal moveout-corrected common mid-point (CMP) gathers, the primary energy is flat while the multiple energy curves downward (Hampson, D., “Inverse velocity stacking for multiple elimination,” Canadian Journal of Exp. Geophysics, 22, 44-55, 1986). Other coherent energy can be distinguished through modeling and subtraction (Le Meur et al., “Adaptive ground roll filtering,” 70th EAGE Conference & Exhibition, Expanded Abstracts, 2008).
For random noise, many attenuation algorithms require regularly sampled data. Thus, irregularly sampled datasets need to be regularized prior to de-noising. The simplest method of achieving regularization is flex binning, which duplicates traces from neighboring bins to fill holes in coverage. While this method ensures one trace per bin, the flex-bin traces will often not be a good representation of what would have been recorded in those bins, particularly for data with significant dip. In addition, jitter can be apparent in the data due to irregular sampling within the bins. The application of traditional methods (such as fx prediction filtering) in such circumstances will be sub-optimal because the irregularity of the sampling makes the primary energy disjointed. As such, the primary energy will be smeared and detail will be lost.
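The fill-by-duplication idea behind flex binning can be caricatured in one dimension as follows; the function name and toy single-sample "traces" are assumptions made for this sketch, and real flex binning operates on 2D bin grids with distance-limited search.

```python
import numpy as np

def flex_bin_fill(binned):
    """Fill empty bins by duplicating the trace from the nearest
    populated bin (a 1D caricature of flex binning)."""
    filled = list(binned)
    populated = [i for i, tr in enumerate(binned) if tr is not None]
    for i, tr in enumerate(binned):
        if tr is None:
            nearest = min(populated, key=lambda j: abs(j - i))
            filled[i] = binned[nearest]
    return filled

# Bins 1 and 3 have no recorded trace (holes in coverage).
binned = [np.array([1.0]), None, np.array([3.0]), None, np.array([5.0])]
filled = flex_bin_fill(binned)
print([float(tr[0]) for tr in filled])  # [1.0, 1.0, 3.0, 3.0, 5.0]
```

The duplicated traces guarantee one trace per bin, but, as noted above, they are only copies: for dipping events the copied trace can differ substantially from what a receiver in the empty bin would actually have recorded.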
In addition to de-noising of random noise, many other seismic processing techniques, including migration, place strict requirements on the regular spatial distribution of traces and benefit when those requirements are met. For example, as shown in FIG. 5, regularization can increase the number of traces with a common midpoint and often increases the resolution of the resulting image. As a consequence, datasets that do not fulfill these requirements, including most 3D land surveys, will suffer from poor processing results (Poole and Lecerf, “Effect of regularisation in the migration of time-lapse data,” First Break, 24, 25-31, 2006). Although not a substitute for well-sampled field data, data reconstruction or interpolation performed on pre-stack seismic data can provide useful data preconditioning that allows processing techniques to work better and hence provide a superior result.
For narrow-azimuth marine data, 2D and 3D reconstruction techniques have been successful, as the data have been relatively well sampled (Poole and Herrmann, “Multi-dimensional data regularization for modern acquisition geometries”, 69th EAGE Conference & Exhibition, Expanded Abstracts, P143, 2007). With wide azimuth geometries, such as those commonly used for 3D land and ocean bottom node (OBN) or cable (OBC) surveys, it is necessary to use an algorithm that respects all recording dimensions simultaneously. In addition to the inline and crossline directions, this means also using the offset and azimuth, or alternatively offset-x and offset-y directions.
The concept of interpolating simultaneously in all spatial domains was first introduced by Liu et al., 2004 (“Simultaneous interpolation of 4 spatial dimensions,” 74th SEG international conference). In Liu's paper, a minimum weighted norm interpolation (MWNI) method is used to derive a 5D Fourier transform of the data. The Fourier transform represents a continuous model, which can be used to reconstruct energy at any spatial coordinate. The method was further developed by Trad, 2009 (“Five-dimensional interpolation: Recovering from acquisition constraints,” Geophysics, 74, V123). The anti-leakage Fourier transform (Xu et al., “Anti-leakage Fourier transform for seismic data regularization,” Geophysics, 70, 87-95, 2005) was also extended to 5D by Poole, 2010 (“5D data reconstruction using the anti-leakage Fourier transform,” EAGE conference proceedings). Variants of the anti-leakage method, sometimes referred to as ‘matching pursuit,’ may also be used. (It should be noted that, as used herein, the term ‘anti-leakage’ relates to an iterative procedure wherein calculated model components are reverse transformed and then subtracted from the input data before additional model components are determined.)
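The iterative subtract-and-repeat procedure can be sketched in one spatial dimension; this is a simplified illustration of the anti-leakage / matching-pursuit idea rather than any of the cited 5D implementations, and the function name, candidate frequencies, and synthetic data are all assumptions.

```python
import numpy as np

def anti_leakage_ft(x, d, freqs, n_iter):
    """Estimate Fourier coefficients of irregularly sampled data: at each
    step pick the strongest component, subtract its contribution from the
    residual, then repeat (reducing leakage from irregular sampling)."""
    residual = d.astype(complex)
    model = np.zeros(len(freqs), dtype=complex)
    for _ in range(n_iter):
        # DFT of the residual evaluated on the irregular coordinates x.
        coeffs = np.array([np.mean(residual * np.exp(-2j * np.pi * f * x))
                           for f in freqs])
        k = int(np.argmax(np.abs(coeffs)))
        model[k] += coeffs[k]
        residual -= coeffs[k] * np.exp(2j * np.pi * freqs[k] * x)
    return model

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 200))   # irregular sample coordinates
d = np.cos(2 * np.pi * 3 * x)             # single event at |f| = 3
freqs = np.arange(-8, 9)                  # candidate frequencies
model = anti_leakage_ft(x, d, freqs, n_iter=4)
k_peak = int(freqs[int(np.argmax(np.abs(model)))])
print(abs(k_peak))  # dominant component recovered at |f| = 3
```

Once the model is estimated, it is continuous in the spatial coordinate, so energy can be reconstructed at any desired (regular) output location by evaluating the retained components there.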
While working in 5D offers the best chance of deriving an accurate Fourier representation of the data, a number of issues remain. For example, most 5D algorithms work on sub-cubes of data, where it is assumed that events are pseudo-linear. In the inline and crossline directions, this assumption is generally valid. While the assumption is also valid in the offset-x and offset-y directions, sampling in these directions is often so coarse that aliasing can become a problem. For this reason, it is often preferred to perform 5D reconstruction in the inline-crossline-offset-azimuth domain.
While windowing in the azimuth direction is possible to make event timing as a function of azimuth pseudo-linear, coarse input sampling often requires that a full 360-degree model be used. Working with a full 360 degrees also takes advantage of the fact that the data wrap around continuously in the azimuth dimension, which satisfies the Fourier periodicity assumptions. However, over 360 degrees the event timing is no longer linear in azimuth, and linear sparseness constraints are no longer suitable.
The approach of Schonewille (“Regularization with azimuth time-shift correction,” SEG international conference, 2003) can be adapted into the 5D data reconstruction scheme to reduce the impact of timing variations relating to dipping reflectors. This approach derives time shifts as a function of event dip and two-way travel time to flatten the sinusoidal behavior shown in FIG. 4. The shifts may then be re-instated on the reverse transform, depending on the output coordinates. In more complex geology, e.g., where there is anisotropy, this approach is less effective.
An alternative to improving the signal-to-noise ratio of data by noise filtering or data interpolation is the ‘common-reflection surface’ (CRS) approach. This method derives a multi-parameter parametric description of the reflection timing of the data, which also provides a depth velocity model of the dataset. Such a concept is described by Jager et al., 2001 (“Common-reflection-surface stack: Image and attributes,” Geophysics, Vol. 66, pp. 97-109). The common-reflection-surface parameters describe the travel times of the main events in the data and were initially used for stacking. Instead of stacking in the common midpoint domain after normal moveout, the common reflection surfaces (which describe events in space as well as offset) were used to improve stack coherence by stacking energy along surfaces in space as well as offset. This method proved very powerful at revealing weak reflections in areas of poor signal-to-noise ratio.
The use of CRS parameters was later extended to improve the coherence of pre-stack data through averaging or median filtering of the data along the defined surfaces. Later still, Hoecht et al., 2009 (“Operator-orientated CRS interpolation,” Geophysical Prospecting, V57, pp. 959-979) used simple interpolation operators along the CRS surfaces for the purpose of 3D data interpolation. The common-reflection-surface parameters define the travel times of the main reflectors in the data, but are not a linear representation of the data itself. A method similar to CRS, ‘multi-focusing,’ also appears in the literature; see Berkovitch et al., 1994 (“Basic formula for multifocusing stack,” 56th Meeting, European Association of Exploration Geophysics, Expanded Abstracts, p. 140) and U.S. Pat. No. 5,103,429 (granted April 1992).
Despite all of the foregoing approaches, there is a long-standing need for a model of a seismic dataset that facilitates a variety of applications such as de-noising and regularization. Preferably, pre-processing the seismic data to provide such a model would establish a framework that simplifies subsequent seismic processing and generates better seismic images.