Oil and natural gas are crucial commodities in the world's supply of energy resources. As such, the location and utilization of subsurface resources are important activities in the energy industry, with several companies dedicating significant resources to locating and extracting oil and natural gas from beneath the earth's surface.
To locate an oil reservoir, researchers use various techniques. One such technique is volumetric seismic data mapping. Seismic data is obtained by generating seismic source waves that are transmitted into the earth and reflected by subsurface materials. The reflected signals can be recorded and computationally processed to allow researchers to visualize the subsurface volume in three dimensions. This information, in turn, allows researchers to predict where hydrocarbons might be found below the surface of a region. Recent technological advances have allowed researchers to visualize and track seismic volumetric data through the display of complex, virtual three-dimensional images on interactive machines.
Typically, seismic data comprises a large collection of seismic traces, each trace representing the acoustic signal detected by a remote sensor after the signal has been transmitted by a seismic source and passed through the subsurface. A number of seismic traces can be generated from a single sensor by moving the seismic source or by using additional seismic sources at different locations. This collection of seismic traces can image a broad area. A researcher or processor might sort the traces into one or more types of gathers. Gathers made up of traces that are processed and selected to image the same location in the earth are then stacked to form an output 3D seismic volume. Stacked 3D seismic volumes are generally used by seismic interpreters to help predict whether that subsurface region contains hydrocarbons. Various imaging algorithms might be employed before or after stacking that blend the data across traces in complex ways to improve the final image.
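The stacking step described above can be sketched in a few lines. The following is a minimal illustration using NumPy, with a toy synthetic gather; the function name and array shapes are assumptions for the example, not part of any particular processing system.

```python
import numpy as np

def stack_gather(traces):
    """Stack a gather of seismic traces into a single output trace.

    traces: 2D array of shape (n_traces, n_samples), where each row is one
    trace processed and selected to image the same subsurface location.
    The sample-wise mean reinforces the coherent signal shared across
    traces while incoherent noise tends to cancel out.
    """
    traces = np.asarray(traces, dtype=float)
    return traces.mean(axis=0)

# Toy gather: a common underlying signal plus independent noise per trace.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
gather = signal + 0.5 * rng.standard_normal((8, 100))

stacked = stack_gather(gather)
```

With eight traces, the noise in the stacked trace is reduced relative to any single raw trace, which is the basic rationale for stacking.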
Although this approach is an intuitive method for inspecting the quality of potential drill sites, it has some limitations. The quality of the underlying data organized into a stack determines whether the stack itself offers useful information. In some cases, the seismic sensors will record false signals or “noise” that may negatively impact the quality of the final stacked 3D seismic volume. For example, several subsurface materials may reflect the same wave from a seismic source multiple times before it reaches a data sensor. In areas of complex geology, various traces may contain significantly less signal because the majority of the acoustic energy is reflected or refracted away from the sensor, or the traces may be contaminated by various types of noise that make the original signal difficult to distinguish.
It is often difficult to acquire clean data that reflects the underlying sediments in complex geologic areas. When exploring for hydrocarbons in such areas, it is common to employ “wide azimuth” scanning, which involves a number of techniques for acquiring seismic data over the same area but from different directions. Such datasets can be treated individually, but when they are instead composited into a single dataset, the result is a Wide AZimuth (WAZ) dataset. Recently, high-channel-count recording systems and high-productivity vibroseis techniques have created a revolution in onshore 3D seismic productivity, enabling the move from sparse to high-density WAZ acquisition and multiplying the data volume to include data in five dimensions: inline, crossline, offset, azimuth, and time. Nonetheless, seismic interpretation systems tend to use the data in a single-fold three-dimensional arrangement. The quality of data may be further compromised when there are near-surface scatterers, when salt or basalt covers the reservoirs, or in any one of a number of other situations.
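The relationship between the five-dimensional WAZ arrangement and the single-fold 3D arrangement used by interpretation systems can be made concrete with a small sketch. The dimensions below are hypothetical placeholders chosen only to keep the example small.

```python
import numpy as np

# Hypothetical sizes for a tiny high-density WAZ survey:
# inline, crossline, offset class, azimuth sector, time sample.
n_il, n_xl, n_off, n_az, n_t = 4, 4, 6, 8, 50

# The full prestack WAZ dataset as a five-dimensional array:
# one trace per (inline, crossline, offset, azimuth) combination.
waz = np.zeros((n_il, n_xl, n_off, n_az, n_t))

# A conventional interpretation system consumes a single-fold 3D volume.
# Collapsing (stacking) the offset and azimuth axes leaves only
# (inline, crossline, time).
volume_3d = waz.mean(axis=(2, 3))
```

The point of the sketch is the axis bookkeeping: stacking discards the offset and azimuth dimensions entirely, which is why trace selection before stacking matters.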
While simple stacking of all the data is usually an improvement over a single-azimuth stack, a better result can be obtained by separating out those traces that do not have sufficient signal and excluding them from the final stack.
Existing attempts to address this problem generally take a mathematical approach. Researchers may develop and employ complex mathematical algorithms that seek to automate the process of identifying which traces contain signal and which do not, and then exclude low-signal traces from the final stack. While generally successful, such algorithms cannot always correctly decide which traces contain signal and which contain noise, particularly in complex areas.
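One simple illustration of such automated trace selection is to correlate each trace against a pilot stack and exclude the traces that correlate poorly. This is only a sketch of the general idea; the function, the correlation criterion, and the keep fraction are assumptions for the example, not the algorithm of any particular system.

```python
import numpy as np

def selective_stack(traces, keep_fraction=0.5):
    """Stack only the traces most similar to a pilot (preliminary) stack.

    Traces whose correlation with the pilot stack is lowest are treated
    as low-signal and excluded from the final stack. Returns the
    selective stack and the indices of the traces that were kept.
    """
    traces = np.asarray(traces, dtype=float)
    pilot = traces.mean(axis=0)  # preliminary stack of everything
    scores = np.array([np.corrcoef(t, pilot)[0, 1] for t in traces])
    n_keep = max(1, int(round(keep_fraction * len(traces))))
    keep = np.argsort(scores)[-n_keep:]
    return traces[keep].mean(axis=0), keep

# Synthetic gather: six traces with signal, three that are pure noise.
rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0.0, 4.0 * np.pi, 200))
good = signal + 0.2 * rng.standard_normal((6, 200))
bad = 2.0 * rng.standard_normal((3, 200))
gather = np.vstack([good, bad])

stacked, kept = selective_stack(gather, keep_fraction=6 / 9)
```

In this toy case the correlation criterion cleanly separates the signal-bearing traces from the noise-only ones; the document's point is that in genuinely complex areas such automatic criteria become unreliable.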
Other, less sophisticated approaches for managing noisy data are also available, but their usefulness is inherently limited. One approach is for researchers to include only data traces from one sector (e.g., the northeast) in the final stack and exclude all others. It is not clear that any one of the sector stacks is optimal, and the practical matter of interpreting from multiple datasets is problematic.
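A sector stack of the kind described above amounts to filtering traces by shot azimuth before stacking. The following sketch assumes a per-trace azimuth attribute and a northeast sector of 0 to 90 degrees; both are illustrative choices, not a standard convention.

```python
import numpy as np

# Hypothetical per-trace shot azimuths in degrees (0 = north, clockwise).
azimuths = np.array([10.0, 40.0, 95.0, 200.0, 310.0, 60.0])
traces = np.random.default_rng(2).standard_normal((6, 100))

# A northeast sector stack keeps only traces whose azimuth falls in
# [0, 90) degrees and discards every other direction outright.
in_sector = (azimuths >= 0.0) & (azimuths < 90.0)
ne_stack = traces[in_sector].mean(axis=0)
```

The limitation is visible in the sketch itself: every trace outside the chosen sector is discarded regardless of its signal content, and a different sector choice produces a different stack with no clear way to pick the best one.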
Thus, creating an optimal stack is a persistent problem in the field of seismic interpretation and increases the challenge of locating valuable subsurface energy resources. A means of optimizing the stack by excluding traces with insufficient signal from large arrays of seismic data would allow for a significantly improved image of the subsurface.