Many natural resource industry activities involve exploration for resources, such as oil, within a subterranean formation. One commonly used exploration method is to analyze surface seismic data from receivers responding to a man-made shock to the subterranean formation, such as an explosive detonation. Different response modes, such as compressional and shear waves, are detected by receivers called seismometers or geophones. By analyzing and comparing the characteristics of the measured signals (e.g., mode, amplitude, phase, frequency, and arrival times) to the characteristics of simulated or synthetic signals derived from a numerical model, some potential resources of commercial interest can be identified. These commercially interesting portions of the subterranean structure may then be further explored, or the natural resources recovered by methods such as drilling.
The objective of the seismic method is to estimate the in-situ properties of one or more underground layers in a zone of commercial interest. These layers are typically sandwiched between shoulder beds immediately overlying and underlying the layers of commercial interest (i.e., the surrounding beds are not of commercial interest). In addition, seismic signals may have to travel through other layers not of commercial interest to reach the geophones. These layers can refract signals, change a signal's propagation speed and amplitude, convert a signal from one mode to another, attenuate transmitted signals, and generate additional signals (including reverberatory events called multiples). The analysis of seismic signals must therefore distinguish among these many factors.
Seismic analysis has essentially been accomplished in three steps. First, different layer parameters (e.g., density, compressional wave velocity, and shear wave velocity) are blocked into inferred discrete layers. The inferred layers may be derived from other information and data, such as well drilling core samples, well logs, and surface geological observations. An estimate of the average density and wave propagation velocities (for each mode) within each inferred layer is used, in conjunction with regression analysis, to construct a discrete model representing the subsurface physical characteristics. The second step is to produce a synthetic seismic response from the discrete model and compare it to the actual field response; erroneously modeled layers are detected as comparison discrepancies. The final step revises the discrete model and generates another synthetic seismic response. The process is iterated until the comparison discrepancies are no longer significant.
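The three steps above can be sketched in code. The following is a minimal illustration using a toy 1-D blocked model: the layer values, the impedance-based synthetic response, and the layer-stripping update rule are all illustrative assumptions introduced here, not the actual field procedure described in the text.

```python
# Toy sketch of the three-step iterative matching loop: block a discrete
# model, compute a synthetic response, compare to field data, revise, repeat.
# All numeric values and the update rule are illustrative assumptions.

def impedance(rho, v):
    """Acoustic impedance of a layer (density x velocity)."""
    return rho * v

def synthetic_response(layers):
    """Step 2: reflection coefficient at each interface of the blocked model."""
    coeffs = []
    for (r1, v1), (r2, v2) in zip(layers, layers[1:]):
        z1, z2 = impedance(r1, v1), impedance(r2, v2)
        coeffs.append((z2 - z1) / (z2 + z1))
    return coeffs

def misfit(synthetic, observed):
    """Comparison discrepancy between synthetic and field responses."""
    return sum((s - o) ** 2 for s, o in zip(synthetic, observed))

def invert(layers, observed, max_iters=20, tol=1e-12):
    """Steps 1-3: start from a blocked model, compare, revise, iterate."""
    layers = [list(layer) for layer in layers]   # [density, velocity] pairs
    for _ in range(max_iters):
        if misfit(synthetic_response(layers), observed) < tol:
            break                                # discrepancies insignificant
        for i, r in enumerate(observed):         # revise the discrete model
            z_above = impedance(*layers[i])
            z_below = z_above * (1 + r) / (1 - r)
            layers[i + 1][1] = z_below / layers[i + 1][0]
    return layers

# "True" subsurface, used here only to fabricate a stand-in field response.
true_model = [(2.0, 1500.0), (2.2, 2000.0), (2.4, 2600.0)]
observed = synthetic_response(true_model)

# Initial blocked model with deliberately wrong velocities in layers 2 and 3.
start = [(2.0, 1500.0), (2.2, 1800.0), (2.4, 2400.0)]
fitted = invert(start, observed)
```

In this toy setting the revision step recovers the interface impedances directly from the reflection coefficients; a realistic implementation would instead use regression analysis over many parameters, as the text indicates.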
One type of lithology, such as a simple sedimentary structure, contains nearly horizontal layers. However, existing seismic analysis methods must contend with many factors even in a simple lithology. These include: 1) multiple non-homogeneous layer interfaces within the zone of interest, which attenuate and convert signals of interest and/or produce unwanted signals and wave conversions; and 2) intervening (e.g., near-surface) layers that produce additional signals (e.g., multiple reflections) and further attenuate and convert signals of interest from one mode to another.
Actual lithological structures are more complex than the aforementioned simple lithology, and complex structures create a profusion of difficult-to-interpret seismic signals. A viscoelastic forward model that simulates every signal, regardless of its amplitude and attenuation, would typically be uneconomically large and require uneconomic operating expenses (e.g., long computer run times). Therefore, simplifications of discrete forward models are common.
One approach to model simplification is point-source ray tracing. This method concentrates on one or more important geological features (e.g., a layer of commercial interest). A modification of this approach models discrete event branching of signals, but still limits the number of geological features modeled. This branched approach may also redundantly calculate a ray path that is involved in several branches (i.e., events).
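For concreteness, point-source ray tracing through flat, homogeneous layers can be sketched as follows. The layer thicknesses, velocities, and takeoff angle are illustrative values; the only physics assumed is Snell's law (the ray parameter p = sin(theta)/v is constant across interfaces).

```python
import math

# Sketch of point-source ray tracing through horizontal, homogeneous layers.
# Layer values and the takeoff angle are illustrative assumptions.

def trace_ray(takeoff_deg, layers):
    """Trace one downgoing ray; return (horizontal offset m, travel time s).

    layers: list of (thickness_m, velocity_m_per_s), top to bottom.
    Snell's law keeps p = sin(theta) / v constant across each interface.
    """
    theta = math.radians(takeoff_deg)
    p = math.sin(theta) / layers[0][1]      # ray parameter (s/m)
    x = t = 0.0
    for thickness, v in layers:
        sin_t = p * v                       # incidence angle in this layer
        if sin_t >= 1.0:                    # post-critical: ray turns back
            break
        cos_t = math.sqrt(1.0 - sin_t * sin_t)
        x += thickness * sin_t / cos_t      # horizontal distance in layer
        t += thickness / (v * cos_t)        # time spent in layer
    return x, t

# Illustrative three-layer model traced with a 20-degree takeoff angle.
model = [(500.0, 1800.0), (800.0, 2400.0), (600.0, 3000.0)]
offset, travel_time = trace_ray(20.0, model)
```

A branched variant would call a routine like this once per event branch, which is where the redundant ray-path calculations noted above arise.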
Another method may not limit the number of layers, but instead limits or suppresses types of numerical simulation (i.e., model) calculations. For example, calculations of surface waves, mode conversions, and multiple reflections can be suppressed. Phase velocity and frequency attenuation windows can also narrow or limit the calculations to further simplify the model.
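One way such suppression might be expressed is as a configuration object consulted before each calculation. This is a hypothetical sketch; the field names, event-type labels, and window values are all invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical configuration for suppressing classes of simulation
# calculations, as described above. All names and values are illustrative.

@dataclass
class SimulationConfig:
    include_surface_waves: bool = False      # suppress surface-wave terms
    include_mode_conversions: bool = False   # suppress P<->S conversions
    include_multiples: bool = False          # suppress reverberatory multiples
    phase_velocity_window: tuple = (1500.0, 4500.0)  # m/s
    frequency_window: tuple = (5.0, 60.0)            # Hz

    def event_allowed(self, event_type, phase_velocity, frequency):
        """Return True if an event survives the suppression settings."""
        if event_type == "surface" and not self.include_surface_waves:
            return False
        if event_type == "converted" and not self.include_mode_conversions:
            return False
        if event_type == "multiple" and not self.include_multiples:
            return False
        v_lo, v_hi = self.phase_velocity_window
        f_lo, f_hi = self.frequency_window
        return v_lo <= phase_velocity <= v_hi and f_lo <= frequency <= f_hi
```

As the following discussion notes, every suppression flag and window in such a scheme is also a place where a signal of interest can be silently discarded.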
However, model simplification creates problems when comparing a simplified synthetic response to actual seismic data. Limiting the number of layers or the types of calculations can mask synthetic signals and/or prevent the generation of synthetic seismic signals from an exact subset of events. Simplified models also do not identify the contribution of each event to the signal; i.e., the model is not capable of fully identifying the nature, origin, and type of event causing a specific signal in the simulated response.
One current method attempts to exclude or filter out unwanted effects in the actual data so that comparison to the simplified model response can be made more easily. Filtering has involved limiting the frequency bandwidth of the signals, the response time window (e.g., avoiding first-to-arrive surface waves), or the number of events.
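The two most common filters mentioned above, a time window and a frequency bandwidth limit, can be sketched as follows. The window limits, band edges, and the two-tone test trace are illustrative assumptions; the bandpass uses a naive discrete Fourier transform purely to keep the sketch self-contained.

```python
import cmath
import math

# Sketch of time-window muting (e.g., removing early surface-wave arrivals)
# and frequency bandwidth limiting. All window values are illustrative.

def mute_before(trace, dt, t_min):
    """Zero all samples arriving before t_min seconds."""
    return [0.0 if i * dt < t_min else s for i, s in enumerate(trace)]

def bandpass(trace, dt, f_lo, f_hi):
    """Keep only DFT components whose frequency lies in [f_lo, f_hi] Hz."""
    n = len(trace)
    spec = [sum(trace[k] * cmath.exp(-2j * math.pi * j * k / n)
                for k in range(n)) for j in range(n)]
    for j in range(n):
        f = min(j, n - j) / (n * dt)        # frequency of bin j (Hz)
        if not (f_lo <= f <= f_hi):
            spec[j] = 0.0
    return [sum(spec[j] * cmath.exp(2j * math.pi * j * k / n)
                for j in range(n)).real / n for k in range(n)]

# Example: a 10 Hz signal of interest contaminated by a 60 Hz component.
dt, n = 0.002, 100                           # 500 Hz sampling, 0.2 s trace
trace = [math.sin(2 * math.pi * 10 * k * dt)
         + 0.5 * math.sin(2 * math.pi * 60 * k * dt) for k in range(n)]
filtered = bandpass(trace, dt, 5.0, 30.0)    # 60 Hz component removed
```

The danger discussed next is inherent in this sketch: any geological feature whose energy falls outside the chosen window is removed along with the noise.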
However, too much data filtering may mask geological features. Conversely, too much model simplification may produce results incapable of being matched with the actual field seismic response data. To obtain reasonable assurance of valid results, very little filtering and extensive calculations must be used, e.g., a large number of layers and a wide frequency band window to attain a high level of reliability. Even extensive calculations may not generate a proper simulation if even one critical simplification or suppression (e.g., a narrow time window) is included. Still further, the extensive calculations can be unreliable, generating extraneous signals.
Matching seismic data may also be difficult due to sensor accuracy and sensitivity. An extensive model may generate signals below the sensitivity of the data collection system. In addition, the repeatability and accuracy of the data collection system may not produce results that match the synthetic data from the model.
None of the current simplified model approaches known to the inventors has the capability of producing synthetic data with an exact subset of events specified by the user. Additionally, no other method is capable of identifying the nature, origin, and type of events in the simulated result to the extent possible in the present method. Still further, the present method eliminates the risk of masking an important part of the synthetic signals. Finally, no other simplified/suppressed numerical simulation is capable of accurately simulating the filtered seismic data with comparable computer usage efficiency.