Hydrocarbon reservoir simulation is one of the most powerful tools for guiding reservoir management decisions. Simulations are used for all stages, including planning early production wells, diagnosing problems with recovery techniques, and decisions regarding construction or overhaul of expensive surface facilities. Geologic complexity and the high cost of resource development continue to push reservoir simulation technology.
The earliest reservoir simulators date back to the 1930s when physical models were employed to understand behaviors of reservoirs that surprised the operator or misbehaved after years of production. Most often, the physical model was a vessel with clear sides that allowed viewing of the interactions between sand, oil and water. In addition to these physical models, electrical simulators relying on the analogy between flow of electrical current and flow of reservoir fluids were also available.
The early 1950s saw a transition from physical models to analytical descriptions of the reservoir using production information and other reservoir data. Thus, well-known principles such as conservation of mass, fluid dynamics, and thermodynamic equilibrium between phases were applied to determine what was happening in the reservoir.
Generally, the equations governing a mathematical model of a hydrocarbon reservoir cannot be solved by analytical methods because the resulting partial differential equations are too complex, numerous, and nonlinear. Instead, numerical models were produced in a form amenable to solution by digital computers. Numerical models have been used since the 1950s to predict, understand, and optimize complex physical fluid flow processes in petroleum reservoirs.
In a numerical simulator, the reservoir is represented by a series of interconnected blocks, and the flow between the blocks is solved numerically. Most geologic models built for petroleum applications are in the form of a three-dimensional array of blocks (cells), to which geologic and/or geophysical properties such as lithology, porosity, acoustic impedance, permeability, and water saturation are assigned. The entire set of model blocks represents the subsurface earth volume of interest. The goal of the geologic-modeling process is to assign rock properties to each block in the geologic model. From there, the simulator itself computes fluid flow throughout the reservoir using partial differential equations that treat the reservoir volume as a numbered collection of blocks and the reservoir production period as specific time steps. Thus, the reservoir model is discretized in both space and time.
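To make the block-and-timestep discretization concrete, the following is a minimal illustrative sketch (not the disclosed simulator): single-phase pressure diffusion over a one-dimensional row of grid blocks, advanced with explicit time steps. All function names, property values, and boundary conditions here are hypothetical choices for illustration only.

```python
import numpy as np

def step_pressure(p, mobility, dt, dx):
    """Advance block pressures by one explicit time step.

    p        : pressure per block
    mobility : (permeability / viscosity) at each block interface
    """
    flux = mobility * np.diff(p) / dx       # flow between adjacent blocks
    p_new = p.copy()
    p_new[1:-1] += dt / dx * np.diff(flux)  # accumulate inter-block flow
    return p_new                            # boundary blocks held fixed

# A step-change pressure profile relaxing toward the low-pressure boundary
p = np.full(10, 100.0)
p[-1] = 50.0
for _ in range(100):
    p = step_pressure(p, mobility=np.ones(9), dt=0.01, dx=1.0)
```

Each block exchanges fluid only with its immediate neighbors, which is exactly the local coupling that the discretized partial differential equations impose on the full three-dimensional model.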
As computer power increased, engineers created bigger, more geologically realistic models requiring much greater data input. These realistic models, based on measurements taken in the field, including well logs, seismic surveys, structural and stratigraphic mapping, and production history, often require advanced computer systems for simulation.
Unfortunately, computerized modeling tends to be limited by the available software and computing architecture. In the early 2000s, the semiconductor industry settled on two main trajectories for designing faster processors: multicore and many-core microprocessors. The multicore trajectory, e.g., central processing units (CPUs), maintains the execution speed of sequential programs while moving into multiple cores. In contrast, the many-core trajectory, e.g., graphics processing units (GPUs), focuses on the execution throughput of parallel applications. Parallel computing operates on the principle that large problems like reservoir simulation can be broken down into smaller ones that are then solved concurrently.
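The divide-and-combine principle can be sketched in a few lines: the grid is split into subdomains, each subdomain is processed concurrently, and the partial results are combined. The per-block computation and all names below are toy placeholders, not any part of the claimed method.

```python
from concurrent.futures import ThreadPoolExecutor

def pore_volume(blocks):
    # Toy per-subdomain work: sum of porosity * bulk volume per block
    return sum(phi * v for phi, v in blocks)

grid = [(0.2, 1000.0)] * 400                # (porosity, volume) per block
subdomains = [grid[i:i + 100] for i in range(0, len(grid), 100)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partial = list(pool.map(pore_volume, subdomains))

total = sum(partial)                        # combine the partial results
```

The same decomposition pattern underlies GPU execution, where each subdomain maps to a group of parallel threads rather than a worker in a thread pool.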
A large performance gap has arisen between multicore and many-core microprocessors. As of 2009, the ratio of peak floating-point throughput between many-core GPUs and multicore CPUs was about 10 to 1. This difference in computing ability is attributed largely to differences in design philosophy: CPUs devote much of their chip area to control logic and caches that accelerate sequential instruction execution, whereas GPUs devote it to arithmetic units that maximize parallel throughput. The shift from serial processing to parallel systems is a direct result of the drive for improved computational performance.
Originally, GPUs were purely for pixel processing for gaming and similar industries. However, the advent of CUDA, Brook and OpenCL programming platforms has widened the use of GPUs to general-purpose calculations.
Current many-core technology utilizing GPUs for Single Instruction Multiple Data (SIMD) applications has shown some improvement in simulator performance, especially for seismic processing. However, for reservoir simulation, the major bottleneck is the need for both linear and nonlinear solvers, which do not lend themselves readily to parallel computation.
While many existing simulation tools for linear and nonlinear solutions have been accelerated by generic parallel software packages running on clusters of computers, the transition to many-core technology has surpassed the capabilities of these standard package solutions. Parallel programming requires a non-trivial distribution of tasks and data. Developers have to manually identify which simulation tools are appropriate for the GPU and keep track of computations running on both the CPU and GPU cores. Additionally, developers have to manually initiate and manage data transfers between the two. This is a tedious and error-prone process that makes it difficult for developers to implement their applications effectively.
In the case of reservoir simulation, the success of efficient GPU implementations is limited by the spatial and temporal dependencies established by the discretized equations governing flow phenomena. Traditionally, each simulation may require a different approach to parallelization based on the physics of the problem (e.g., black-oil, compositional, thermal), the numerical formulation (e.g., degree of implicitness, type of spatial discretization and meshing), the input data (e.g., reservoir geometry, heterogeneity) and user-supplied options (e.g., timestep control, flash calculations).
Most of the work in this area has focused on improving linear solver components. Solvers are used for the calculation of flow within the reservoir, which is the most difficult part of the simulation. US20120203515 discloses a heterogeneous (hybrid) computing environment composed of both CPUs and GPUs for processing iterative linear solutions. US20100082724 discloses a parallel-computing iterative solver that employs a preconditioning algorithm for modeling a large sparse system of linear equations.
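As a hedged illustration of the class of preconditioned iterative solvers such applications describe, the following sketch applies a Jacobi (diagonal) preconditioned conjugate gradient method to a small symmetric positive-definite system standing in for a discretized pressure equation. The matrix, tolerances, and function name are illustrative assumptions, not the method of either cited application.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b by conjugate gradient with a diagonal preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    Minv = 1.0 / np.diag(A)        # Jacobi preconditioner: inverse diagonal
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p             # update solution along search direction
        r -= alpha * Ap            # update residual
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r               # apply preconditioner to new residual
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # new conjugate search direction
        rz = rz_new
    return x

# 1-D Laplacian-like tridiagonal system, typical of pressure equations
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = jacobi_pcg(A, b)
```

The matrix-vector products and elementwise preconditioner applications in the loop are the operations that map naturally onto GPU hardware; the sequential dependency between iterations is what makes full parallelization difficult.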
Numerous factors are driving current production simulation planning to produce accurate results in the shortest possible time. These include remote locations, geologic complexity, complex well trajectories, enhanced recovery schemes, heavy-oil recovery and unconventional gas. Operators now want accurate simulations of the field from formation discovery through secondary recovery and final abandonment. However, current software and hardware configurations are limiting the turnaround time of simulations.
Thus, what is needed in the art is a framework to perform the massive amounts of parallel computing inherent in reservoir simulations. Ideally, such a framework would significantly reduce the turnaround time for computations. Furthermore, the framework would be capable of performing both linear and nonlinear solver operations without adversely affecting simulation time.
Upscaling has been around for quite some time. However, despite important advances in this field, several key questions remain open before upscaling can be considered a complete solution for speeding up simulations. Some of these issues include:
1. There is no known upscaling method robust and reliable enough to hold for varying boundary conditions. That is, upscaling is usually carried out for a given set of flow boundary conditions, well locations and rates, and operating conditions. Once any of these conditions are changed during the life of a reservoir, the upscaled model may no longer be representative of the original fine flow model.
2. Upscaling presents important shortcomings in handling complex geologies and physics. The dynamic aspects of the reservoir have been handled only to a certain extent for the purpose of upscaling. It is still unclear how to perform upscaling for compositional flow, chemical transport, combined flow and geomechanical effects, or EOR processes in general.
3. Upscaling may introduce scaling effects in optimization, uncertainty quantification and decision making. That is, selected models and decisions may be biased by scaling effects that are hard to characterize or afford in a study cycle time.
4. Even if upscaling is a suitable approach, there is still a need to speed up fine-scale simulations to construct, validate and verify the upscaled model.
It is clear that both upscaling and parallel computing benefit when the underlying governing physics is understood, despite the degree of uncertainty. In this case, upscaling can be used adaptively where required (i.e., local and adaptive grid refinement) and parallel computing can be employed to balance and speed up the overall computation time. In the proposed invention, the idea is that parallel computing is used in a “smart” way in the form of hybrid computing and adaptive algorithms with the aid of GPUs.