In petroleum exploration, a seismic reflection survey is a common method for obtaining a seismic image of the subsurface. In this method, acoustic waves are transmitted by appropriate energy sources, called emitters, travel through the subsurface to be explored, and are reflected from the different reflectors it contains. The reflected waves are recorded, as a function of time, by suitable receivers disposed on the ground surface or in the water. Each recording (or trace) given by a receiver is then assigned to the location of the point situated at the middle of the segment connecting the source to the receiver. This operation is referred to as common midpoint gathering.
A seismic prospecting technique, which is now conventional, is multiple coverage. In this technique, the sources or emitters and the receivers are disposed on the ground surface in such a way that several recordings are associated with a given midpoint. The series of recordings associated with the same midpoint forms what is generally called a common midpoint gather of recordings or traces. The set of gathers is associated with a series of different midpoints, preferably located along the same line at the surface. Based on these gathers, seismic processing serves to obtain a seismic image in the vertical plane passing through all these midpoints.

The arrival time of a recorded wave varies with the angle of incidence θ, which is the angle between the normal to the reflector at the reflection point, called the mirror point, and the direction of the incident (descending) wave. For a given gather and a given mirror point, this angle varies for each recording as a function of the offset x of the receiver relative to the midpoint. Under the conventional assumption of a homogeneous and isotropic subsurface with plane, parallel layers, the reflections associated with each of the subsurface reflectors, observed on a common midpoint gather, are theoretically aligned along hyperbolas centered on the vertical through the midpoint, called time/distance curves. In order to stack the recordings of each gather, it is necessary to apply a time-dependent correction, called the normal-moveout (NMO) correction, which is intended to straighten the hyperbolas and bring them theoretically to the horizontal. Conventionally, the normal-moveout correction is based on the following equation:

t^2 = t_0^2 + x^2/V^2   (1)

where: x is the offset,
t_0 is the zero-offset source/receiver reflection travel time,

V, which is a function of time, is the average wave propagation velocity in the subsurface, and

t is the travel time after reflection associated with a source/receiver pair for offset x.

In the static-correction formula (2), given further below:

V_1 is the wave propagation velocity in the first subsurface layer explored,

t and t_0 are the source/receiver reflection travel times for offset x and for zero offset respectively, and

t_p, called the time of focusing depth, is the sum of the time t_0 (the time between the origin of the coordinates and the apex of the correction hyperbola) and the time between the origin of the coordinates and the centre of the hyperbola, whose asymptote is controlled by a single velocity.

In the aforementioned formula, the term in x^2 could be followed by at least one higher-order term in x^4, which is ignored.

According to the method, the following successive steps are performed:

- for the set of normal-moveout-corrected gathers of traces, the maximum values of the positive and negative residues of the NMO correction are determined relative to a time t_0 corresponding to zero offset; an analysis time range is then determined, located on either side of said time t_0, whose width is equal to at least the sum of the absolute values of said maximum values and at most to twice the absolute value of the maximum residual moveout;

- a family of 2n+1 residual-correction hyperbolas or parabolas is constructed, each having its apex centered on said time t_0 and, at the value of the maximum offset, presenting a time value equal to one of the 2n+1 equidistant time values predetermined on the analysis range, including the value t_0 and the extreme values of said analysis range;

- 2n+1 sets of static corrections are determined for each of the offsets, defined by the time differences presented relative to said time t_0 on the 2n+1 residual-correction hyperbolas or parabolas;

- for each gather of traces, and successively for each of the traces and for each hyperbola or parabola, the static correction associated with the offset of the trace is applied; and

- the statically corrected traces are stacked together in order to obtain a set of 2n+1 stacked traces characterising a correction velocity field.
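As a minimal numerical sketch of the NMO correction of formula (1), assuming a uniformly sampled trace and one average-velocity value per output sample (the function name and parameters are illustrative, not taken from the patent):

```python
import numpy as np

def nmo_correct(trace, dt, x, v):
    """Normal-moveout correction of one trace, following formula (1).

    trace : amplitudes sampled at a uniform interval dt (seconds)
    x     : source/receiver offset (metres)
    v     : average velocity V for each output sample t_0 (m/s)
    """
    n = len(trace)
    t0 = np.arange(n) * dt            # zero-offset time axis
    t = np.sqrt(t0**2 + (x / v)**2)   # formula (1): t^2 = t_0^2 + x^2/V^2
    # each corrected sample at time t_0 is read from the input trace at time t
    return np.interp(t, t0, trace, left=0.0, right=0.0)
```

For x = 0 the mapping is the identity; for non-zero offsets, reflections are moved up their hyperbola towards the zero-offset time t_0, so that the events of a gather become horizontally aligned before stacking.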
To make satisfactory NMO corrections, it is necessary to know the velocity distribution V(t) at each midpoint. To achieve this, velocity analyses are made at given locations on a limited number of common midpoint gathers. The results are then subjected to a double interpolation: in time for each of the analyses, since each analysis only gives a maximum of some twenty velocity values associated with the same vertical; and in abscissa, between the analyses, since said analyses are commonly performed only every 40 to 50 midpoints on average. Moreover, the analysis is made manually, which implies relatively long analysis times and relatively high processing costs.
The conventional velocity analysis consists in applying constant velocities in succession to the common midpoint gathers, for the midpoints selected, in order to make the corresponding NMO corrections, then in stacking the corrected traces for each of the velocities used, and in manually retaining the velocities that lead to an energy peak on the stacked trace. The accuracy of the velocity field obtained by this process is insufficient for a large number of more sophisticated treatments applied to the prestack traces (recordings), for example migration, inversion, or measurements of the effects of variations in amplitude with offset (denoted AVO, for Amplitude Variation versus Offset), because these processes are distorted by the effects of the time and abscissa interpolations, by the inaccuracy of the velocity values selected, and by a signal distortion due to the NMO correction formula (1).
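The constant-velocity scan just described can be sketched as follows; the function and its parameters are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def velocity_scan(gather, offsets, dt, velocities):
    """Constant-velocity analysis sketch: NMO-correct a common midpoint
    gather with each trial velocity, stack the corrected traces, and
    return the stack energy per trial velocity.  Energy peaks indicate
    the velocities a processor would retain."""
    n = gather.shape[1]
    t0 = np.arange(n) * dt
    energy = []
    for v in velocities:
        stack = np.zeros(n)
        for trace, x in zip(gather, offsets):
            t = np.sqrt(t0**2 + (x / v)**2)   # formula (1)
            stack += np.interp(t, t0, trace, left=0.0, right=0.0)
        energy.append(np.sum(stack**2))
    return np.array(energy)
```

When the trial velocity matches the medium velocity, the reflections of all traces align at the same zero-offset time, so their stack, and hence its energy, is maximal.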
In an article entitled "Normal moveout revisited: Inhomogeneous media and curved interfaces", published in the review Geophysics, Volume 53, No. 2, February 1988, pp. 143-157, Eric de Bazelaire developed another method of velocity analysis, used for building improved stacked sections, called "Polystack" sections. This analysis consists in constructing a document called a BAP, which is associated with a common midpoint gather of recorded seismic traces.
The BAP associated with a common midpoint gather is another gather of traces, each of which results from stacking the traces of the common midpoint gather after applying to each of these traces a static type of correction (independent of time) that differs from one trace to another. These static corrections, which have the effect of shifting all the samples of a trace by the same time interval, are representative of predetermined curvatures. Said static corrections are defined by correction hyperbolas according to the following formula:

t = t_0 - t_p + sqrt(t_p^2 + x^2/V_1^2)   (2)

where x is the offset and V_1, t, t_0 and t_p are as defined above.
Accordingly, the N traces of a BAP correspond to an investigation along N different moveouts or curvatures.
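A sketch of how the traces of a BAP could be computed under these definitions; the names, the sampling choices, and the rounding of shifts to whole samples are illustrative assumptions. The key property of formula (2) is that, for a fixed t_p, the moveout sqrt(t_p^2 + x^2/V_1^2) - t_p does not depend on t_0, so correcting a trace reduces to a single bulk time shift, i.e. a static correction:

```python
import numpy as np

def bap(gather, offsets, dt, v1, tp_values):
    """Sketch of a BAP: one stacked trace per trial focusing time tp.

    gather    : traces of one common midpoint gather, shape (ntraces, n)
    offsets   : offset x of each trace (metres)
    v1        : velocity of the first layer V_1 (m/s)
    tp_values : trial focusing times tp, one per BAP column (seconds)
    """
    n = gather.shape[1]
    out = np.zeros((len(tp_values), n))
    for j, tp in enumerate(tp_values):
        for trace, x in zip(gather, offsets):
            # static shift of formula (2); independent of t_0, always >= 0
            delta = np.sqrt(tp**2 + (x / v1)**2) - tp
            shift = int(round(delta / dt))        # whole samples
            shifted = np.zeros(n)
            if shift < n:
                shifted[:n - shift] = trace[shift:]   # advance by delta
            out[j] += shifted
    return out
```

A reflection whose moveout actually follows formula (2) for some tp is flattened, and therefore stacks coherently, on exactly one column of the BAP, which is what makes the document pickable.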
The BAP associated with a given common midpoint gather explores the whole field of possible hyperbolic curvatures for said midpoint, from the most concave at the bottom (low velocities), corresponding to low positive values of t_p, to the most convex towards the top (imaginary velocities), corresponding to negative values of t_p, and including the infinite velocity (t_p infinite). Since the area scanned is very wide, each BAP has a large number of columns, generally more than 200.
Although it serves to obtain an accurate and continuous velocity field, the Polystack method nevertheless has a number of disadvantages. First, it is a costly method, because computing time is long due to the size of the BAP (more than 200 columns). Secondly, there are risks of instability due to automatic picking errors on the BAP. Thirdly, this method does nothing to eliminate multiples, which remain undifferentiated from the real events.