Techniques to aid the recovery of material from a reservoir include model-based simulation. Reservoir simulators, in particular, have been developed to examine the flow of fluids such as oil and gas within a reservoir and from the reservoir. Reservoir simulators are generally built on reservoir models that include the petrophysical characteristics needed to understand the behavior of the fluids over time, and may be used to predict future reservoir production under a series of potential scenarios, such as drilling new wells, injecting various fluids, or performing stimulation treatments. Reservoir simulations may be used, for example, to identify optimal numbers and/or locations of wells, optimal completions of wells, efficacy of artificial lift and/or enhanced oil recovery, and/or expected production of recoverable fluids.
Reservoir simulations generally take into account existing wells, as wells drilled into the same reservoir, and particularly into the same regions of a reservoir, generally have interrelated effects on the fluid flows, pressures, etc. experienced by one another.
As computing power has increased, so too have the sophistication and modeling capabilities of reservoir simulators. Reservoirs are generally modeled as three-dimensional collections of cells, with each cell modeling one or more properties of a particular volume of the reservoir. Over time, a need has continued to exist for modeling a reservoir with increasingly finer resolution, as doing so generally leads to more accurate simulation results. As a result, cell sizes continue to decrease, leading to an increased number of cells in a reservoir model. Moreover, reservoir simulators are increasingly relied upon to model larger and larger reservoirs, further increasing the number of cells in a reservoir model.
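By way of illustration only, a reservoir model of the kind described above may be sketched as a three-dimensional grid of cells, each storing petrophysical properties for one volume of the reservoir. The grid dimensions and property names below are assumptions chosen for the example, not details from this disclosure:

```python
import numpy as np

# Hypothetical resolution: 100 x 100 x 20 cells. Halving the cell edge
# length in every dimension would multiply the cell count by eight,
# illustrating why finer resolution rapidly inflates model size.
nx, ny, nz = 100, 100, 20

# One array per modeled property; each entry describes one cell's volume.
reservoir_model = {
    "porosity":     np.zeros((nx, ny, nz)),  # pore-volume fraction per cell
    "permeability": np.zeros((nx, ny, nz)),  # flow capacity per cell
    "pressure":     np.zeros((nx, ny, nz)),  # fluid pressure per cell
}

total_cells = nx * ny * nz  # 200,000 cells at this resolution
```

Even this modest hypothetical grid contains 200,000 cells, each of which may require per-timestep computation during a simulation.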
As the number of cells in a reservoir model increases, however, the amount of computational resources needed to perform a reservoir simulation also increases. Given that similar computations may need to be performed for individual cells in a reservoir model during a reservoir simulation, parallel processing techniques may be used to perform these computations in parallel for different cells, thus decreasing the overall time needed to perform the simulations. High Performance Computing (HPC) computer systems, including supercomputers and other massively parallel computing systems, for example, are capable of devoting hundreds or thousands of individual processing resources to a complex reservoir simulation. Even for smaller and/or single-user computers such as workstations or desktop computers, however, multi-processor and/or multi-core processor architectures still provide ample opportunities for increased parallelism.
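One common way to exploit such parallelism, sketched here purely for illustration, is to partition the model's cells into near-equal contiguous chunks and assign one chunk to each processing resource. The partitioning function below is a hypothetical example of this general approach, not a technique taken from this disclosure:

```python
def partition_cells(num_cells, num_resources):
    """Split cell indices into near-equal contiguous chunks,
    one chunk per processing resource."""
    base, extra = divmod(num_cells, num_resources)
    chunks, start = [], 0
    for r in range(num_resources):
        # The first `extra` resources take one additional cell each,
        # so chunk sizes differ by at most one.
        size = base + (1 if r < extra else 0)
        chunks.append(range(start, start + size))
        start += size
    return chunks

# e.g. 10 cells across 3 resources -> chunks of sizes 4, 3, 3
chunks = partition_cells(10, 3)
```

Each resource can then perform its per-cell computations independently, with communication needed only at chunk boundaries.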
The performance of parallel reservoir simulations, however, can vary greatly based upon workload distribution among the processing resources performing the simulation. If some resources are overloaded with work while others sit idle, the benefits of parallelism decrease. Moreover, the communication costs associated with communicating data between processing resources can decrease simulation performance, so whenever processing resources need to pass work or data between one another, performance is also adversely impacted.
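The load imbalance described above can be quantified; a common metric (an assumption for illustration, not one specified in this disclosure) is the ratio of the most-loaded resource's work to the average work per resource, where 1.0 indicates a perfectly balanced distribution:

```python
def load_imbalance(work_per_resource):
    """Ratio of the maximum per-resource workload to the mean workload.
    A value of 1.0 means perfectly balanced; larger values mean that
    the slowest (most loaded) resource gates overall simulation time."""
    mean = sum(work_per_resource) / len(work_per_resource)
    return max(work_per_resource) / mean

# One overloaded resource while the others are nearly idle:
ratio = load_imbalance([8.0, 1.0, 1.0, 2.0])  # 8.0 / 3.0, roughly 2.67
```

Because the parallel phase finishes only when the most-loaded resource does, a ratio of 2.67 implies the simulation step takes roughly 2.67 times longer than a perfectly balanced distribution of the same total work would.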
For example, one aspect of many reservoir simulations involves the determination of a well index, or well transmissibility, for existing and/or potential wells coupled to a reservoir. To perform these “well solves”, wells are generally assigned to processing resources based on a round-robin distribution or on heuristic techniques. However, it has been found that a poor distribution of well solves between processing resources may lead to load imbalance, high communication costs, and overall poor simulation performance.
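A round-robin distribution of the kind mentioned above can be sketched as follows; the well names and resource count are hypothetical, and the sketch shows why the scheme is simple but blind to per-well cost and well-to-cell locality:

```python
def round_robin_assign(wells, num_resources):
    """Assign each well to a processing resource by cycling through
    resource ids 0, 1, ..., num_resources - 1 in order."""
    return {well: i % num_resources for i, well in enumerate(wells)}

wells = ["W1", "W2", "W3", "W4", "W5"]
assignment = round_robin_assign(wells, 2)
# W1, W3, W5 -> resource 0; W2, W4 -> resource 1
```

Because the assignment considers only the order of the wells, a resource may receive several expensive well solves, or wells whose reservoir cells reside on other resources, producing exactly the load imbalance and communication costs noted above.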
A need therefore exists in the art for an improved manner of allocating well solves between available processing resources in a reservoir simulation.