1. Field of the Invention
This invention relates in general to hydrocarbon production, and more particularly to methods, computer readable media, apparatus, and program code for estimating or otherwise determining permeability at unsampled, uncored but logged interval locations in a reservoir.
2. Description of the Related Art
Permeability is essential for calculating saturation height functions, building a three-dimensional model of the reservoir, and forecasting field performance. In a typical oil or gas field, all boreholes are “logged” using electrical tools to measure petrophysical parameters such as porosity and density. A sample of these boreholes is cored, and the cored material is used to measure permeability directly. Coring, however, is expensive and time consuming in comparison to the electrical survey techniques most commonly used to gain information about permeability. A challenge to the industry, therefore, is to accurately predict permeability in all boreholes by inference from the routinely logged electrical surveys and the more limited core information.
In principle, determining permeability from electrical measurements is a matter of solving equations in rock physics. In practice, there are numerous complicating factors that make a direct functional relationship difficult or impossible to determine. One problem is that permeability is related to the aperture of pore throats between rock grains, which logging tools do not directly measure. Even in the laboratory it is difficult to relate a log response to a physical parameter. Several parameters such as mineralogy, reservoir fluids, and drilling fluid invasion can influence the permeability measurement. Determining permeability from well logs is further complicated by the problem of scale: well logs typically have a vertical resolution of two feet compared to two inches in core plugs. Additionally, there are measurement errors on both the logs and core.
Traditionally, permeability prediction has involved an integration of core and well log data. Laboratory-measured core porosity and permeability are used in a regression analysis to derive a relationship between the two measurements. The relationship is then applied to well log porosity to derive permeability.
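The traditional workflow described above can be sketched as a log-linear regression of core permeability on core porosity, then applied to logged porosity. The data values and function names below are hypothetical illustrations, not measurements from any actual reservoir:

```python
import numpy as np

# Hypothetical core-plug measurements: porosity (fraction) and
# permeability (millidarcies). Values are illustrative only.
core_phi = np.array([0.08, 0.12, 0.15, 0.18, 0.22, 0.25])
core_k   = np.array([0.5, 3.0, 12.0, 40.0, 150.0, 400.0])

# Permeability is conventionally regressed in log space:
#     log10(k) = a * phi + b
a, b = np.polyfit(core_phi, np.log10(core_k), 1)

def permeability_from_porosity(phi):
    """Apply the core-derived transform to well-log porosity."""
    return 10.0 ** (a * np.asarray(phi) + b)

# Predict permeability at an uncored, logged interval.
k_pred = permeability_from_porosity([0.10, 0.16, 0.20])
```

Because the fit is a single straight line in log space, every predicted value falls on that line, which is precisely why such a transform cannot capture the large permeability scatter discussed below.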
There are several limitations to this technique. In carbonate reservoirs, for example, these simple relationships are often not valid because permeability is a function of several different parameters, not porosity alone. Reservoirs such as those in the Middle East typically show a large porosity/permeability scatter: correlations between permeability and porosity are generally poor, and large variations of permeability within a small porosity range are typical. A regression line through such data simply does not account for the large uncertainties in permeability prediction.
In addition, regression analyses introduce errors by averaging data that may not represent the reservoir permeability. That is, as a result of the data being averaged (i.e., smoothed), the high and the low permeability zones that have a great impact on the fluid flow are under-represented, and thus, the full effects of these “highways and barriers” are not incorporated into the simulation models. Additionally, permeability is not simply a function of porosity: many factors related to the depositional and diagenetic environment conditions play a role.
Several authors have proposed methods based on statistics, neural networks, and clustering in order to build predictive models. In formations with strong heterogeneity such as those having thinly laminated sands and shale, however, such methods would be expected to overestimate the minimum permeability, underestimate the maximum permeability, and make discrete, multi-modal permeability appear as a broadly uni-modal property. P. Rabiller (2001).
Although neural networks are generally well-suited to problems involving complex relationships, it is very difficult to determine how a trained network makes its decisions. Consequently, it is hard to determine which features are important and useful for the classification, and which are worthless. Neural network algorithms also depend on multiple settings (e.g., network initialization, learning function, weighting function, and number of iterations) that the user often cannot meaningfully control. This is why the approach is generally considered a black box.
Cluster analysis seeks to overcome the limitations of prior methods by simultaneously incorporating all of the petrophysical attributes to determine permeability. In this type of analysis, log signatures are used to derive permeability and lithology by training against laboratory-measured core values and, where available, visual core descriptions. The K-nearest neighbor (KNN) algorithm, in contrast to the neural network black box, can inform the user of which variable is most important to the prediction because of its straightforward linear mapping. On the other hand, one of the most serious shortcomings of KNN is its sensitivity to irrelevant parameters: adding a single parameter that takes random values, and therefore cannot separate the classes, can cause the method to underperform significantly.
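A distance-weighted KNN estimator of the kind described above can be sketched as follows. The attribute names, the data values, and the role of `beta` (taken here as an analogue of the distance power parameter β mentioned below) are assumptions for illustration:

```python
import numpy as np

def knn_log_perm(train_X, train_logk, query, k=3, beta=2.0, attr_weights=None):
    """Distance-weighted KNN estimate of log10 permeability.

    train_X    : (n, d) array of normalized log attributes for cored samples
                 (e.g., porosity, bulk density)
    train_logk : (n,) log10 permeability measured on the corresponding cores
    query      : (d,) attribute vector at an uncored depth
    beta       : distance power parameter (assumed analogue of β)
    attr_weights : optional per-attribute weights applied inside the distance
    """
    w_attr = np.ones(train_X.shape[1]) if attr_weights is None else np.asarray(attr_weights)
    d = np.sqrt(np.sum(w_attr * (train_X - query) ** 2, axis=1))
    idx = np.argsort(d)[:k]                      # the k nearest cored samples
    w = 1.0 / (d[idx] ** beta + 1e-12)           # inverse-distance weighting
    return np.sum(w * train_logk[idx]) / np.sum(w)

# Illustrative training data: (porosity, density) -> log10 k.
X = np.array([[0.10, 2.45], [0.15, 2.40], [0.20, 2.35], [0.25, 2.30]])
logk = np.array([0.0, 1.0, 2.0, 2.5])

est = knn_log_perm(X, logk, np.array([0.15, 2.40]), k=2)
```

Note that the distance computation treats every attribute equally unless weights are supplied; appending one random, uninformative attribute column would distort all of the distances, which is exactly the sensitivity to irrelevant parameters noted above.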
Conventional software using KNN as the core engine for permeability prediction suffers from several drawbacks. These include: arbitrary user inputs for the number of nearest neighbors, the weights applied to the well log data, the normalization parameter Pk, and the distance power parameter β, often chosen according to loose criteria; a requirement for manual and time-consuming sensitivity analysis; and a manual and time-consuming well-by-well blind test. Current practice of permeability prediction using KNN allows only limited and manual cross-validation procedures, often performed well by well, manually, and on only a few key wells.
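The manual blind test described above can, in principle, be automated as a leave-one-out cross-validation that holds out each cored sample in turn. The sketch below assumes a simple inverse-distance KNN estimator and synthetic illustrative data:

```python
import numpy as np

def knn_estimate(X, logk, query, k=2, beta=2.0):
    """Inverse-distance KNN estimate of log10 permeability (illustrative)."""
    d = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] ** beta + 1e-12)
    return np.sum(w * logk[idx]) / np.sum(w)

def loo_rmse(X, logk, k=2, beta=2.0):
    """Leave-one-out RMSE between measured and predicted log10 permeability."""
    errs = []
    for i in range(len(logk)):
        mask = np.arange(len(logk)) != i          # hold out sample i
        pred = knn_estimate(X[mask], logk[mask], X[i], k=k, beta=beta)
        errs.append(pred - logk[i])
    return float(np.sqrt(np.mean(np.square(errs))))

# Synthetic cored samples: (porosity, density) -> log10 k, illustrative only.
X = np.array([[0.10, 2.45], [0.12, 2.43], [0.15, 2.40],
              [0.18, 2.37], [0.20, 2.35], [0.25, 2.30]])
logk = np.array([0.0, 0.4, 1.0, 1.6, 2.0, 2.5])

rmse = loo_rmse(X, logk, k=2)
```

Such a cross-validation error is computable for every well rather than only a few key wells, removing the manual, well-by-well character of the blind test.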
Additional drawbacks include the use of unclear quality control criteria or objective functions, and unclear estimation of the prediction error. In the absence of a clear quantitative target, current interpretation and validation of the results remains highly heuristic. Current use of KNN-based permeability prediction also suffers from traditional averaging artifacts: high values of permeability are underestimated and low values are overestimated.
Recognized, therefore, by the inventor is the need for methods, program code, computer readable media, and apparatus that can combine the existing KNN algorithm with a constrained nonlinear optimization algorithm. No current methods suggest using an optimization algorithm to find the optimum inputs to the KNN-based prediction, or improve on the conventional approach by finding the KNN prediction parameters that lead to the best prediction with no human bias. Recognized also is the need for methods, program code, computer readable media, and apparatus that can define a clear objective function to minimize discrepancies between measured and predicted permeability and optimize all user inputs. Further recognized is the need for methods, program code, computer readable media, and apparatus that can integrate a smoothing correction procedure to compensate for KNN's inherent averaging artifacts.
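One way to read the proposed combination is as a search over the KNN inputs that minimizes a clearly defined misfit objective. The sketch below uses an exhaustive grid search as a simple stand-in for a constrained nonlinear optimizer, with leave-one-out RMSE as the objective; the search bounds, the data, and the parameter grid are all illustrative assumptions:

```python
import numpy as np
from itertools import product

def knn_estimate(X, logk, query, k, beta):
    """Inverse-distance KNN estimate of log10 permeability (illustrative)."""
    d = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] ** beta + 1e-12)
    return np.sum(w * logk[idx]) / np.sum(w)

def objective(X, logk, k, beta):
    """Leave-one-out RMSE between measured and predicted log10 permeability."""
    errs = [knn_estimate(np.delete(X, i, axis=0), np.delete(logk, i),
                         X[i], k, beta) - logk[i]
            for i in range(len(logk))]
    return float(np.sqrt(np.mean(np.square(errs))))

# Synthetic cored samples: (porosity, density) -> log10 k, illustrative only.
X = np.array([[0.10, 2.45], [0.12, 2.43], [0.15, 2.40],
              [0.18, 2.37], [0.20, 2.35], [0.25, 2.30]])
logk = np.array([0.0, 0.4, 1.0, 1.6, 2.0, 2.5])

# Constrained search space for the KNN inputs (bounds are illustrative):
# number of neighbors k in {1, 2, 3}, distance power beta in {1, 2, 3}.
best_k, best_beta = min(product([1, 2, 3], [1.0, 2.0, 3.0]),
                        key=lambda p: objective(X, logk, *p))
```

Because the objective is evaluated automatically over the whole search space, the selected parameters are reproducible and carry no human bias; a smoothing correction for the averaging artifacts noted above would be applied as a separate post-processing step on the resulting predictions.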