Modern seismic techniques include the use of data acquisition devices spaced apart at regular intervals over a distance, typically several kilometers. The data acquisition devices collect seismic signals picked up by one or more appropriate receivers (hydrophones or geophones) in response to vibrations transmitted into the ground by a seismic source and reflected back by the discontinuities of the subsoil. The signals which the receivers collect are sampled, digitized, and stored in a memory before transmission of the data to a central control and recording facility or station.
The data acquisition devices may be connected to the central control and recording station by common cables or other means adapted for transmitting both control and test signals and the accumulated seismic data. The various data acquisition devices are interrogated in sequence by the central station and in response the data acquisition devices transmit accumulated data to the central station.
Such a system is described, for example, in U.S. Pat. No. 4,398,271. The different acquisition devices may be connected to the central station by a short-wave link, in which case each of them is associated with radio equipment. The collected data may be transmitted to the central station in real time and simultaneously for all the acquisition devices. This requires the use, and therefore the availability, of a large number of wide or narrow band short-wave transmission channels of different frequencies.
Transmission of the data collected by the acquisition devices may also be made sequentially, each of them transmitting in turn its own data either directly to the central laboratory or through other intermediate acquisition devices or relay elements. Recording means are then used for storing the collected data for the time required for their sequential transfer to the central station. Short-wave-link seismic data transmission systems are described, for example, in U.S. Pat. No. 4,583,206.
As new methods of interpreting three-dimensional seismic data increase in popularity, managing ever-larger data volumes becomes as critical as acquisition and processing. Moreover, the interpretation and use of seismic data require faster, non-sequential, random access to large data volumes. In addition, quantitative interpretations lead to an increasing need for full 32-bit resolution of amplitudes, rather than the 8- or 16-bit representations that have been used in most current interpretation systems.
Seismic data compression can be a significant tool in managing these data, but “lossy” data compression techniques by definition introduce errors into the recovered or reconstituted images. In fact, several problems regarding image definition arise when using currently available lossy wavelet-transform data compression algorithms, even though wavelet compression introduces less noise than the currently accepted truncation compression. Compressing the small blocks of data needed for random access leads to artifacts in the data, and such artifacts must be eliminated for maximum utility in the data acquisition and interpretation system.
Applications of wavelet-transform-based data compression in areas of seismic acquisition, transmission, storage, and processing have been proposed over the past several years. Most of such applications have been concerned with establishing the validity of lossy compression algorithms, particularly when seismic processing is to be carried out on previously compressed data. Most of these applications have been devoted to pre-stack data sets, where the data volume has been so large that the benefits of data compression would be most important. It is now becoming accepted that wavelet-transform or similar lossy data compression algorithms can be very useful in most of these applications, if careful analysis of the effects of compression noise is carried out. Diagnostic standards are currently being developed to permit the use of compression in many areas with full confidence that compression noise will not degrade data quality in any significant manner.
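As an illustrative sketch only (not taken from any cited patent or product), the basic mechanism of lossy wavelet-transform compression can be shown with a one-level Haar transform followed by hard thresholding of the detail coefficients. Coefficients below the threshold are zeroed, which is what makes the data compressible; the resulting reconstruction error is the “compression noise” discussed above. The trace values and threshold here are arbitrary example numbers.

```python
def haar_forward(x):
    """One-level orthonormal Haar transform: scaled pairwise sums and differences."""
    s = 2 ** 0.5
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_forward."""
    s = 2 ** 0.5
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) / s)
        x.append((a - d) / s)
    return x

def compress(x, threshold):
    """Zero out small detail coefficients; a larger threshold yields a
    higher compression ratio but more reconstruction (compression) noise."""
    approx, detail = haar_forward(x)
    detail = [d if abs(d) >= threshold else 0.0 for d in detail]
    return approx, detail

# Hypothetical seismic trace samples.
trace = [0.1, 0.12, 0.9, 0.88, -0.5, -0.48, 0.05, 0.07]
a, d = compress(trace, threshold=0.05)
recovered = haar_inverse(a, d)
noise = max(abs(r - t) for r, t in zip(recovered, trace))  # compression noise
```

With the threshold set to zero the round trip is exact (lossless); raising it zeroes more coefficients and raises the noise floor, which is the trade-off the diagnostic standards mentioned above are intended to manage.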
While these concepts may seem theoretical in nature, they become increasingly significant when applied to operational systems for the acquisition of seismic data. In a typical modern seismic acquisition system, whether land, marine, or transition zone, a number of acquisition units are distributed over the area of interest, as previously described. Each acquisition unit is attached to one or more sensors. Each acquisition unit is capable of measuring the sensor signals over a period of time called a record, and sampling the measurements to create a data record. The record is coordinated by the central control and recording facility or station, referred to in this disclosure as the central unit, to occur in synchronism with the activation of an energy source. The resulting subterranean echoes are the desired seismic data. The acquisition units then use their built-in telemetry capabilities to transmit the data some time afterwards to the central unit. The central unit may send the data to any combination of archival tapes, local pre-processing systems, or, via satellite telemetry, a remote office.
For state-of-the-art distributed digital seismic data telemetric systems, telemetry bandwidth is traded off against the total number of channels, distance between repeaters, total length of the system, power consumption, equipment weight, total data throughput, and data reliability. Each of these factors in turn determines the efficiency and cost of a seismic survey. For example, higher bandwidth increases the total number of channels which may be transmitted in the given length of time between seismic shots, or records. Higher bandwidth also increases the number of channels which may be carried on a line segment of the system.
On the other hand, higher bandwidth typically decreases the distance allowed between repeaters, thus requiring more repeaters in the system. Higher bandwidth also generally increases the power required for each repeater, besides requiring that more of them be powered. Greater power consumption requires larger and heavier power sources or heavier wire gauges, making the equipment less efficient to operate. This factor also affects the distances between repeaters and power sources. Further, replacing or recharging batteries or power sources because of the increased power demand increases service effort and therefore costs.
Aside from considerations of demands on the system structure, higher bandwidth increases the number and frequency of errors introduced in the seismic data, and therefore the computational load and additional bandwidth overhead for detection and correction of such errors. Reducing bandwidth requirements, while not compromising the useful information content of the data, would make seismic surveys less costly. One way to reduce bandwidth requirements is to use data compression.
Data compression reduces the total amount of data required to convey the same information. It is well known in the art of digital data processing that there are a number of schemes for both lossless and lossy data compression. A data block, such as a file, may be run through a process of compression to reduce it to a smaller block for storage or transmission. A reverse process, decompression, will return the data block to its original form so that it may be manipulated. Lossless compression assures that the digital data recovered is an exact representation of the original data, but is limited in data reduction ability. It is used for data in which no bit may be changed without losing the exact meaning, such as computer programs, financial records, word processing files, and other similar applications. Lossy data compression, on the other hand, yields much greater reduction of data in the compressed state, but the recovered data will not be an exact representation of the original. This is useful for data whose ultimate destination is an analog or graphical representation, such as sound or visual recordings, where keeping key audible or visible features retains the important audio or visual content. Seismic data falls into this category.
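The contrast between the two categories can be sketched briefly using Python's standard zlib module. Lossless compression recovers the original bytes exactly, while a simple illustrative lossy scheme, quantizing 32-bit floating-point samples down to 8-bit integers, guarantees a 4:1 reduction but only approximates the original. The sample values below are arbitrary, and the quantizer is a deliberately minimal stand-in for a real lossy codec.

```python
import struct
import zlib

# Hypothetical 32-bit floating-point samples in [-1.0, 1.0].
samples = [0.0, 0.25, -0.5, 0.75, -1.0, 0.125, -0.375, 0.625]
raw = struct.pack(f"{len(samples)}f", *samples)  # 4 bytes per sample

# Lossless: exact round trip, modest and data-dependent reduction.
packed = zlib.compress(raw)
restored = zlib.decompress(packed)               # bit-for-bit identical to raw

# Lossy: map each sample onto an 8-bit integer -- a guaranteed 4:1
# reduction, at the cost of quantization error in the recovered samples.
quantized = bytes(int(round((s + 1.0) * 127.5)) for s in samples)
recovered = [q / 127.5 - 1.0 for q in quantized]
error = max(abs(r - s) for r, s in zip(recovered, samples))  # small but nonzero
```

The lossless round trip satisfies `restored == raw` exactly, whereas the lossy path leaves a small residual `error`; for seismic data the design question is only whether that residual is negligible relative to ambient noise.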
With lossy compression, a special parameter, called “Q”, enters into the compression process as a threshold and scaling factor. Q is related to the compression factor, or compression ratio (how much the original data volume is reduced), and to the amount of data loss. Increasing Q increases the compression ratio. For systems which use lossy data compression, the compression ratio determines the amount of error, i.e. noise, introduced in the compression/decompression process: increasing the compression ratio also increases compression noise. Further, as previously mentioned, introduction of ambient noise in certain data acquisition and transmission systems is a fact of life. What is important is that the noise introduced in the compression process, or compression noise, be sufficiently small relative to the ambient noise, over which there is little or no control. Alternatively, the compression noise must be kept much smaller than the signals which are of principal interest in the system.
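The role of Q as a combined threshold and scaling factor can be sketched with a bare quantization step: coefficients are divided by Q and rounded, so a larger Q zeroes more coefficients (raising the achievable compression ratio) while adding more compression noise. The coefficient values and Q settings here are illustrative assumptions, not drawn from any particular compression standard.

```python
def quantize(coeffs, q):
    """Scale by 1/Q and round; values smaller than about Q/2 collapse to zero."""
    return [round(c / q) for c in coeffs]

def dequantize(levels, q):
    """Approximate inverse: rescale the integer levels by Q."""
    return [level * q for level in levels]

def stats(coeffs, q):
    """Return (zero count, max reconstruction error) for a given Q.

    The zero count is a proxy for compressibility: runs of zeros are
    what a subsequent entropy coder exploits.
    """
    levels = quantize(coeffs, q)
    recovered = dequantize(levels, q)
    zeros = sum(1 for level in levels if level == 0)
    noise = max(abs(r - c) for r, c in zip(recovered, coeffs))
    return zeros, noise

# Hypothetical transform coefficients: a few large values amid small ones.
coeffs = [5.2, -0.3, 0.1, 7.9, -0.05, 0.6, -2.4, 0.02]
low_q = stats(coeffs, q=0.5)   # fewer zeros, lower compression noise
high_q = stats(coeffs, q=2.0)  # more zeros, higher compression noise
```

Comparing the two results shows the trade-off the passage describes: the higher Q produces both more zeroed coefficients and a larger maximum error, and tuning Q is what lets a system hold that error below the ambient noise floor.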
Thus, there yet remains a need for an efficient, implementable data compression system in a seismic data acquisition system in which compression noise may be varied or tuned so that compression noise is small in relation to ambient noise, and/or much smaller than the signals of principal interest.