In seismic data gathering, it is conventional to position a plurality of seismic receivers along the surface of the earth above the formation of interest at locations spaced by a determinable distance. Alternatively, these receivers may be placed downhole.
A plurality of seismic sources disposed at spaced locations on the earth's surface or downhole can then be activated to generate seismic waves which propagate outwardly in all directions. Vibrating devices, explosives, and impulsive devices are examples of some seismic sources. The seismic waves generated by these sources are reflected, refracted and diffracted by interfaces within the formation, and some of these diverted seismic waves are detected by the plurality of seismic receivers and can be processed as seismic signals. Each such receiver receives a signal, which is then recorded. The signals received by a receiver and then recorded are collectively called a trace. Each trace comprises more than one wavelet. The collection of recorded traces is referred to herein as "unfiltered seismic data." Such seismic traces can be displayed as seismic sections which contain information about the time, duration and intensity of the diverted seismic waves. The seismic sections can be studied to infer information regarding the type and location of the subsurface formations producing the seismic signals. This information can then be used to evaluate the subsurface formations for petroleum-bearing properties.
Seismic energy which has generally been reflected only once from a reflecting interface is referred to in the art as a "primary"; similarly, seismic energy which has been reflected more than once is referred to in the art as a "multiple." Such reverberating seismic energy can produce multiples for one or more reflecting interfaces in the earth. The presence of multiples in the seismic data can result in confusing and possibly non-interpretable data where the multiples mask underlying primary reflections. Water-bottom multiples and other multiple reflections often interfere destructively with the primary reflections of interest. Accordingly, the art has developed various methods of attenuating or removing the multiples, with a goal of minimizing the distortion of the underlying primaries.
There are various standard techniques in the art for removing these multiples from unfiltered seismic data. These standard techniques can be classified into four general categories: common midpoint ("CMP"; also called common depth point, or "CDP") stacking, model-based filters, predictive deconvolution and velocity filters. As is known in the art, the term "filter" means a method for removing seismic noise, generally via an algorithm. CMP stacking is also a velocity filter, but requires only general velocity information. Most velocity filters require specific velocity information.
Probably the simplest and most popular multiple suppression method is CMP stacking. Since multiples tend to have different velocities from primaries, normal-moveout correction of the primaries allows stacking to statistically reduce the relative amplitude of the multiples. Weighting the amplitudes of the data prior to stack can improve the multiple reduction. This method, called linear-weighted stacking, causes a relative amplitude reduction in the near-offset traces prior to stack. Since near-offset trace multiples are the principal contaminant in the stacked section, linear-weighted stacking produces strong multiple suppression. Unfortunately, this method also suppresses or alters primary amplitudes.
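By way of illustration only, linear-weighted stacking can be sketched as follows. All function and parameter names here are illustrative assumptions, not part of any claimed method; the sketch assumes an NMO-corrected gather with traces ordered from near to far offset.

```python
import numpy as np

def linear_weighted_stack(gather):
    """Stack an NMO-corrected gather with linear offset weights.

    gather: array of shape (n_traces, n_samples), ordered from
    near offset (row 0) to far offset (last row).
    """
    n_traces = gather.shape[0]
    # Linear weights that down-weight near-offset traces, where residual
    # multiples are strongest, and up-weight far-offset traces.
    weights = np.linspace(0.0, 1.0, n_traces)
    weights /= weights.sum()  # normalize so stack amplitudes stay comparable
    return weights @ gather   # weighted sum over the trace axis

# Example gather: 24 traces of 1000 samples each.
gather = np.random.randn(24, 1000)
stacked = linear_weighted_stack(gather)  # one stacked trace of 1000 samples
```

Note that row 0 (the nearest offset) receives zero weight, which is the limiting case of the near-offset suppression described above; a gentler ramp would weight it lightly rather than mute it entirely.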
Similar results occur for the progenitor of linear-weighted stacking, called near-offset trace muting, in which near-offset amplitudes are completely removed. This, of course, can also strongly suppress or eliminate primary events.
Model-based filters attempt to remove the effect of surface reflections which cause multiples by mathematically removing these surfaces from the data. These filters are based upon the wave equation and can be expensive and difficult to use.
Predictive deconvolution is a method typically used to suppress short-period multiples. This deconvolution uses uniform periodicity to predict and remove multiples. The method is less successful, however, on medium- to long-period multiples because of loss of uniformity. That is, with each consecutive repetition of the primary signal (i.e., each multiple), the shape of the curve defined by the corresponding wavelet in the traces distorts slightly. Thus, the farther in time a given multiple is detected after the primary signal, the less like the original signal it will appear.
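The periodicity-based prediction can be sketched, for a single trace, with a standard prediction-error filter designed from the trace's autocorrelation. This is a minimal sketch under assumed parameters (`gap`, `n_op`, the prewhitening factor `eps`), not the method of any particular processing system.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, gap, n_op, eps=1e-3):
    """Subtract the part of `trace` predictable `gap` samples ahead.

    A filter of length n_op is found from the normal equations
    (Toeplitz system in the autocorrelation), then the prediction
    is subtracted, leaving the unpredictable (primary) energy.
    """
    n = len(trace)
    full = np.correlate(trace, trace, mode="full")
    ac = full[n - 1:]                  # autocorrelation, non-negative lags
    col = ac[:n_op].copy()
    col[0] *= (1.0 + eps)              # prewhitening for numerical stability
    rhs = ac[gap:gap + n_op]
    filt = solve_toeplitz(col, rhs)    # solve the Toeplitz normal equations
    # Predict the trace `gap` samples ahead and subtract the prediction.
    predicted = np.zeros(n)
    for i, f in enumerate(filt):
        lag = gap + i
        predicted[lag:] += f * trace[:n - lag]
    return trace - predicted

# Synthetic trace: a primary spike plus decaying repeats every 60 samples.
trace = np.zeros(400)
trace[20] = 1.0
for k in range(1, 6):
    trace[20 + 60 * k] = 0.7 ** k
out = predictive_decon(trace, gap=60, n_op=10)
```

Because the synthetic repeats are uniformly periodic, the filter predicts and removes them almost entirely; on field data, the distortion of each successive repetition described above degrades this prediction.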
Out of the many different types of velocity filters, the f-k filter is one of the most common. It transforms data into a space in which events are separated based on velocity and removes unwanted events. It requires specific velocity information in order to function. In contrast to predictive deconvolution, the f-k filter can be highly effective in suppressing long-period multiples, but is less effective with medium- and short-period multiples. This is due to the decreasing velocity separation between multiples and primaries as the multiple period shortens.
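The velocity separation exploited by the f-k filter can be illustrated with a simple frequency-wavenumber mask. The cutoff velocity `v_cut` and the sampling intervals are illustrative assumptions; practical f-k filters also taper the mask to avoid ringing, which this sketch omits.

```python
import numpy as np

def fk_velocity_filter(data, dt, dx, v_cut):
    """Reject energy with apparent velocity below v_cut in the f-k domain.

    data: (n_traces, n_samples); dt in seconds, dx in meters.
    """
    fk = np.fft.fft2(data)
    k = np.fft.fftfreq(data.shape[0], d=dx)   # wavenumber axis (cycles/m)
    f = np.fft.fftfreq(data.shape[1], d=dt)   # frequency axis (Hz)
    K, F = np.meshgrid(k, f, indexing="ij")
    # Apparent velocity v = f / k; low-velocity (steep-dip) events such as
    # slow multiples fall below v_cut and are zeroed.
    v_app = np.abs(F) / np.maximum(np.abs(K), 1e-12)
    mask = (v_app >= v_cut).astype(float)
    return np.real(np.fft.ifft2(fk * mask))

# A flat (infinite apparent velocity) event passes through unchanged.
data = np.tile(np.sin(2 * np.pi * np.arange(64) / 16), (8, 1))
filtered = fk_velocity_filter(data, dt=0.004, dx=25.0, v_cut=1500.0)
```

As the multiple period shortens, multiple and primary energy crowd into the same f-k region, so the hard velocity cut above necessarily removes less multiple energy without touching primaries, consistent with the limitation noted in the text.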
Some of the most effective velocity filters not only require specific velocity information, but perform some statistical measurement on the data. They derive a statistical estimate of the multiples which is then subtracted from the unfiltered data. An example of a commonly used statistical velocity filter is the median filter. Most velocity filters of this type strongly attenuate multiples, but distort primary events.
The method of this invention is most similar to median filtering, which is one of the most commonly used time-domain techniques for suppressing coherent noise. To implement this method, the seismic or borehole data are first horizontally aligned. The median filter is then applied along the space axis. The horizontally aligned events are smoothed and emphasized, while other events are attenuated. After the horizontal alignment is removed, the median-filtered data are subtracted, on a trace-by-trace basis, from the unfiltered data. The result is a rejection of multiples. Median filtering was first used in the area of speech processing. It removes amplitude spikes without destroying rapid changes in information, effectively smoothing images without affecting resolution. The median value of n data values (n odd) is defined as the sample in the (n+1)/2 position of the sequence when the data are arranged in ascending order of magnitude. The median filter rejects spikes and passes step functions.
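The median-filter-and-subtract scheme described above can be sketched as follows, assuming the multiples have already been horizontally aligned. The window half-width is an illustrative parameter, not a prescribed value.

```python
import numpy as np

def median_filter_subtract(aligned, half_width=2):
    """Estimate flat (multiple) energy by a running median across traces,
    then subtract the estimate from the aligned data.

    aligned: (n_traces, n_samples), multiples horizontally aligned.
    """
    n_traces, n_samples = aligned.shape
    multiple_est = np.empty_like(aligned)
    for i in range(n_traces):
        lo = max(0, i - half_width)
        hi = min(n_traces, i + half_width + 1)
        # Median over neighboring traces at each time sample (space axis):
        # flat events survive the median; isolated events do not.
        multiple_est[i] = np.median(aligned[lo:hi], axis=0)
    return aligned - multiple_est

# A perfectly flat event is estimated exactly and removed entirely.
flat = np.ones((10, 50))
residual = median_filter_subtract(flat)
```

An event present on only one trace of the window largely survives the subtraction, which is how the median estimate rejects spikes while passing step functions; it is also why dipping primaries caught in the window get distorted, as noted above.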
The tau-p filter, another type of velocity filter, is similar to the f-k filter and yields similar results. Both filters transform the data into a space in which events are separated based on velocity, and both require specific velocity information in order to function.
A combination of methods can also be effective and is often used. For instance, f-k filtering often does not remove multiples on the near-offset traces, so a near-offset trace mute or linear weighting can be run after the f-k filter.
Although multiples usually have different velocities from the primary reflections, their moveout is often non-hyperbolic. The amplitude and frequency content of such events may also vary with offset. Such non-uniform behavior can make these events difficult to filter from data without distorting or suppressing primary reflections. In fact, it is both non-uniformity and non-locality which inhibit most of the standard multiple filters. Because these filters take a global approach, they encounter more non-uniformity and are less likely to attenuate the multiples. On the other hand, those filters which take both a global and a local approach, such as median filters, do well in attenuating the multiples, but tend to distort the primary events. Filters of the latter type have the proper approach, but not the mechanism needed to adequately discriminate between multiple and primary events.
The filter of this invention addresses this problem by using a statistical operator to turn the filter on and off. After the multiples have been horizontally aligned as much as possible, the operator measures both the amplitude and character of neighboring wavelets to determine whether they are multiples. Once the multiples are located, the filter attenuates them by subtracting neighboring wavelets within a moving space-time window. This filter strongly attenuates multiples while avoiding primary events. It does this by removing neighboring, horizontally coherent events and is applied to pre-stack seismic data which is either common midpoint ordered or shot ordered.
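The gating idea can be sketched as follows. This is a speculative illustration only: the specification above does not fix the statistical operator, so normalized cross-correlation is used here as a stand-in similarity measure, and the window length and threshold are assumptions.

```python
import numpy as np

def gated_subtract(aligned, win=32, threshold=0.8):
    """Subtract a neighboring trace's wavelet only where a similarity
    statistic says the two wavelets match (i.e., the filter is 'on').

    aligned: (n_traces, n_samples) with multiples horizontally aligned.
    """
    n_traces, n_samples = aligned.shape
    out = aligned.copy()
    for i in range(1, n_traces):
        for t0 in range(0, n_samples - win + 1, win):
            a = aligned[i, t0:t0 + win]
            b = aligned[i - 1, t0:t0 + win]
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            if denom == 0.0:
                continue
            sim = np.dot(a, b) / denom      # similarity of neighboring wavelets
            if sim >= threshold:            # "filter on": treat as a multiple
                out[i, t0:t0 + win] = a - b  # subtract the neighboring wavelet
    return out

# Two identical aligned traces: every window of the second is cancelled,
# while the first trace (a stand-in for dissimilar primary energy) survives.
w = np.sin(2 * np.pi * np.arange(64) / 16)
out = gated_subtract(np.vstack([w, w]))
```

The gate is the point of the sketch: where neighboring wavelets are dissimilar (a primary cutting across the alignment), the threshold test fails and nothing is subtracted, so the primary passes through undistorted.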
It is an object of this invention to provide a filter which effectively removes or attenuates coherent noise from seismic or borehole data so that underlying primary signals may be viewed.
It is a further object of this invention to remove or attenuate such multiples with minimal distortion of primary signals.
It is a further object of this invention to provide a method for filtering multiples which allows a decision to be made whether or not to subtract wavelets based upon the similarity of the horizontally aligned wavelets.
Other objects of this invention will be apparent to one skilled in the art from review of the specification, figures and claims herein.