1. Field of the Invention
The present invention relates to a method for more rapidly forming a stochastic numerical model of Gaussian or a related type, representative of a porous heterogeneous medium (such as a hydrocarbon reservoir for example) calibrated in relation to data referred to as dynamic data, characteristic of the displacement of fluids in the medium such as, for example, production data (pressures obtained from well tests, flow rates, etc.).
2. Description of the Prior Art
Optimization in a stochastic context consists in determining realizations of a stochastic model which meet a set of data observed in the field. In reservoir engineering, the realizations to be identified correspond to representations, in the reservoir field, of the distribution of petrophysical properties such as the permeability or porosity. These realizations form numerical reservoir models. The available data are, for example, isolated permeability or porosity measurements, a spatial variability model determined from isolated measurements, or data directly related to the fluid flows in an underground reservoir, that is pressures, breakthrough times, flow rates, etc. The data are often not linearly related to the physical properties to be modelled. A randomly drawn realization is generally not in accordance with the whole of the data collected. Coherence in relation to the data is integrated in the model by means of an inverse procedure. See:
Tarantola, A., “Inverse Problem Theory—Methods for Data Fitting and Model Parameter Estimation”, Elsevier Science Publishers, 1987.
The simplest technique is the trial-and-error method. This approach draws realizations at random until a realization meeting the data is obtained. It affords the advantage of preserving the spatial variability of the model but requires prohibitive calculation time. It is therefore very rarely used in practice.
An option that is often preferred is based on gradient calculation. Gradient methods allow modifying an initial realization in a direction of search which is estimated from the gradients. The modifications are applied iteratively until the data calibration is considered acceptable. Gradient methods are attractive because of their efficiency. However, they are no longer suitable when the number of parameters, that is the number of permeability and porosity values forming the numerical model, is large. Besides, they do not allow modifying the realizations while respecting the spatial structure of the stochastic model.
More recently, a geostatistical parameterization technique has been introduced to constrain, by gradual deformation, the stochastic realizations to data on which they depend non-linearly. This technique is described in French patents 2,780,798 and 2,795,841 of the assignee, and in the following publications, notably:
Hu, L. Y., 2000, Gradual Deformation and Iterative Calibration of Gaussian-Related Stochastic Models: Math. Geology, Vol. 32, No. 1,
Le Ravalec, M. et al., 2000, The FFT Moving Average (FFT-MA) Generator: An Efficient Numerical Method for Generating and Conditioning Gaussian Simulations: Math. Geology, Vol. 32, No. 6,
Hu, L. Y., Blanc, G. and Noetinger, B. (2001): Gradual Deformation and Iterative Calibration of Sequential Stochastic Simulations. Math. Geology, Vol. 33, No. 4.
This method has been successfully applied in various cases, notably from data obtained from oil development fields, as described in the following documents:
Roggero, F. et al., 1998, Gradual Deformation of Continuous Geostatistical Models for History Matching, paper SPE 49004: Proc. SPE Annual Technical Conference and Exhibition, New Orleans,
Hu, L. Y. et al., 1998, Constraining a Reservoir Facies Model to Dynamic Data Using a Gradual Deformation Method, paper B-01: Proc. 6th European Conference on Mathematics of Oil Recovery (ECMOR VI), 8–11 Sep. 1998, Peebles, Scotland.
As described in detail hereafter, the gradual deformation method allows gradual modification of a realization of a stochastic model of Gaussian or Gaussian-related type while respecting the spatial structure of the model.
Stochastic Optimization
Let f^obs = (f_1^obs, f_2^obs, . . . , f_M^obs) be the field data and f = (f_1, f_2, . . . , f_M) the corresponding responses simulated for a realization z of a given stochastic model Z. In general, the responses f = (f_1, f_2, . . . , f_M) are obtained by solving the direct problem numerically. Thus, if z represents a permeability field, data f can be pressure measurements. In this case, they are simulated with a flow simulator. The goal of a stochastic optimization is to produce realizations of Z which reduce the differences between the observed data and the numerically simulated corresponding responses. These differences are measured by the following objective function:
J = \frac{1}{2} \sum_{m=1}^{M} \omega_m \left( f_m - f_m^{obs} \right)^2
Coefficients ω_m are weights assigned to the data f_m^obs. The simulated responses f_m are functions of realization z. In this sense, minimization of the objective function is a problem with several variables.
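As an illustration, the objective function above can be computed as follows (a minimal Python sketch; the function name and array arguments are illustrative, not part of the invention):

```python
import numpy as np

def objective(f_sim, f_obs, weights):
    """Weighted least-squares objective J = 1/2 * sum_m w_m * (f_m - f_m^obs)^2."""
    f_sim = np.asarray(f_sim, dtype=float)
    f_obs = np.asarray(f_obs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return 0.5 * float(np.sum(weights * (f_sim - f_obs) ** 2))
```

In practice, f_sim would be the output of a direct flow simulation for the candidate realization z.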
Let N be the number of grid cells or of components forming realization z. N is often very large (10^4 to 10^7). It is therefore very difficult to perform an optimization directly in relation to the components of z. Furthermore, realization z, even modified, must remain a realization of Z. The gradual deformation method allows these difficulties to be overcome.
Random Search from the Gradual Deformation Method
A random field Z is now considered that can be transformed into a Gaussian random field Y. The gradual deformation technique allows construction of a continuous chain of realizations by combining an initial realization y_0 of Y with another realization u_1 of Y, referred to as complementary, u_1 being independent of y_0 (FIG. 1a). The combination coefficients are for example cos(t) and sin(t), and the combined realization meets the relation: y(t) = y_0 cos t + u_1 sin t, where t is the deformation parameter. Another realization chain construction technique combines the initial realization no longer with one, but with P complementary realizations u_p (p = 1, . . . , P) of Y (FIG. 1b). The coefficients of the combination are such that the sum of their squares is 1.
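The basic two-realization combination can be sketched as follows (Python, with NumPy arrays standing in for the Gaussian realizations; the function name is illustrative):

```python
import numpy as np

def gradual_combination(y0, u1, t):
    """Combine two independent realizations of the same Gaussian field.
    Since cos(t)**2 + sin(t)**2 == 1, the combination preserves the mean
    and covariance of the field for every value of t."""
    return np.cos(t) * y0 + np.sin(t) * u1
```

At t = 0 the chain returns y_0 and at t = π/2 it returns u_1; intermediate values of t interpolate continuously between the two realizations.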
Once the chain is formed, it can be explored by varying the deformation parameter t, trying to identify, among all the realizations of this chain, the realization which minimizes the objective function. This minimization is performed in relation to t. Parameterization according to the gradual deformation method thus reduces the number of dimensions of the problem from N to 1, where N is the number of values that constitute the field to be constrained. Furthermore, the sum of the squared combination coefficients being 1, the optimized realization y(t_opt) is still a realization of Y: the optimized realization follows the same spatial variability model as all the realizations of Y.
However, if the exploration of the realization space is restricted to a single chain, the possibilities of sufficiently reducing the objective function are greatly limited. The above procedure therefore has to be repeated, but with new realization chains. These realization chains are constructed successively by combining an initial realization, which is here the optimum realization determined at the previous iteration, with a complementary realization of Y, randomly drawn each time. Thus, at iteration l, the continuous realization chain is written as follows: y_l(t) = y_{l−1} cos t + u_l sin t.
y_{l−1} is the optimum realization defined at iteration l−1 and the u_l are independent realizations of Y. The latter are also independent of y_0. This formulation implies that the realization chain corresponds to a hyperellipse of dimension N.
Minimizing the objective function in relation to t allows the data calibration to be improved, or at least preserved, each time a new realization chain is explored. This iterative minimum search procedure is continued as long as the data calibration is not satisfactory. The random side of the method lies in the fact that, upon each iteration, the complementary realization is drawn at random. In fact, the direction of search followed from the realization optimized at the previous stage is random. The direction of search, for a given chain and from the optimum realization defined above, is:
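The iterative chain-by-chain procedure can be sketched as follows (a toy Python implementation: a grid search over t stands in for a proper 1-D minimizer, white-noise fields stand in for realizations of Y, and `simulate` and `objective_of` are hypothetical callables):

```python
import numpy as np

def gradual_deformation_search(y0, simulate, objective_of,
                               n_chains=20, n_t=41, rng=None):
    """At each iteration, draw a new complementary realization, build the
    one-parameter chain y_l(t) = y_{l-1} cos t + u_l sin t, and keep the
    realization of the chain that minimizes the objective function."""
    rng = np.random.default_rng(rng)
    y = np.asarray(y0, dtype=float).copy()
    ts = np.linspace(-np.pi, np.pi, n_t)  # grid includes t = 0, so the
    for _ in range(n_chains):             # objective can never increase
        u = rng.standard_normal(y.shape)  # complementary realization
        chain = [np.cos(t) * y + np.sin(t) * u for t in ts]
        costs = [objective_of(simulate(z)) for z in chain]
        y = chain[int(np.argmin(costs))]
    return y
```

In a reservoir application, `simulate` would be a direct flow simulation and the complementary realizations would share the spatial variability model of Y rather than being plain white noise.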
\left. \frac{dy_l(t)}{dt} \right|_{t=0} = -y_{l-1} \sin 0 + u_l \cos 0 = u_l
This direction of search depends only on u_l. Furthermore, u_l being independent of the complementary realizations already generated, u_1, u_2, . . . , u_{l−1}, and also of y_0, the direction of search at the start of each new chain is orthogonal to the tangent defined for the previous chain at the same point (FIG. 2). Although it may seem appropriate to select a direction of search orthogonal to this tangent, there is an infinite number of possible orthogonal directions.
Experience shows that, after several iterations, the new directions of search no longer contribute significantly to the decrease of the objective function (FIG. 6).
It has also been considered to combine the initial realization not with one, but with several complementary realizations. In this case, the number of deformation parameters increases: it equals the number of complementary realizations involved in the gradual combination. Although the optimization process is then more flexible, several parameters have to be managed, which is not easy. Besides, such a process is not necessarily more efficient because it may require a larger number of direct flow simulations.
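With P complementary realizations, the constraint that the squared combination coefficients sum to 1 can be enforced through a spherical parameterization of the P deformation parameters, as sketched below (an illustrative Python example, not the patented formulation itself):

```python
import numpy as np

def multi_gradual_combination(y0, us, thetas):
    """Combine y0 with P complementary realizations us[0..P-1].
    The P angles in thetas parameterize P+1 coefficients on the unit
    hypersphere, so the sum of their squares is exactly 1:
    cos^2(t1) + sin^2(t1)cos^2(t2) + ... + prod sin^2(tp) = 1."""
    coeffs, c = [], 1.0
    for th in thetas:
        coeffs.append(c * np.cos(th))  # coefficient of the next field
        c *= np.sin(th)                # remaining "budget" on the sphere
    coeffs.append(c)
    fields = [np.asarray(y0)] + [np.asarray(u) for u in us]
    return sum(a * f for a, f in zip(coeffs, fields))
```

Setting all angles to zero returns y_0 unchanged; varying the P angles explores a P-dimensional family of realizations that all follow the spatial variability model of Y.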