Abstract

Over the past fifteen years, high performance computing has had a significant impact on the evolution of numerical predictive methods for improved recovery from hydrocarbon reservoirs. The complexity of reservoir simulation models has led to computational requirements that have consistently taxed the fastest computers. This work discusses how current state-of-the-art parallel architectures have been investigated to allow models that more closely approach realistic simulations while maintaining accuracy and efficiency. Several modeling approaches have been investigated on different parallel architectures, and these investigations have, in general, shown great promise for the use of massively parallel computers in reservoir simulation. Despite these results, reservoir simulation has been slow to move toward parallel computing in a production environment. There appear to be several reasons for this. First, the recursive nature of existing linear solution techniques for reservoir modeling is not readily adaptable to massively parallel architectures. Second, the trade-off between load balancing and global data structure has yet to be thoroughly investigated. Finally, the role of well and facility constraints and production optimization in massively parallel processing may lead to severe serial bottlenecks. Several approaches for resolving these difficulties are presented.

INTRODUCTION

From the earliest stages of reservoir simulation, models have continued to tax the capabilities of the largest computers. From both a numerical and a physical standpoint, larger and larger grids have been required to adequately model the processes occurring in reservoirs. Beginning in the mid-1970s, the introduction of supercomputing through vectorization completely changed the approach that had been taken toward the development of numerical models for reservoir simulation. Although these computers significantly advanced the speed at which computations could be made, leveraging this computational power through vectorization required significant reorganization and reworking of existing models.1–3

Several publications in the literature have dealt with the application of parallel computing to petroleum reservoir simulation in shared-memory parallel environments. Scott et al.4 investigated the parallelization of the coefficient routines and linear equation solvers for a black-oil model on a Denelcor HEP. Chien et al.5 investigated compositional modeling in parallel on a CRAY X-MP 4/16. Barua and Horne6 applied parallel computing with a nonlinear equation solver for the black-oil case on the Encore Multimax. Killough et al.7 looked at parallel linear equation solvers on both the CRAY X-MP and the IBM 3090. Each of these applications involved the use of a shared-memory parallel computer; the question still remained whether a distributed-memory architecture could be efficiently utilized for simulation of petroleum reservoirs.

More recently, parallelization of reservoir simulators has been accomplished on distributed-memory parallel computers, on both multiple-instruction, multiple-data (MIMD) and single-instruction, multiple-data (SIMD) architectures. Work by van Daalen et al.8 showed a speedup of a factor of forty on sixty processors on the Transputer-based Meiko computer. Wheeler and Smith showed that black-oil modeling could be performed efficiently on a hypercube.
The application of compositional reservoir modeling to the distributed-memory, message-passing Intel iPSC/2 Hypercube was investigated by Killough and Bhogeswara.10,11
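
The recursive nature of standard linear solution techniques noted above can be made concrete with a minimal C sketch. This is an illustration added here under stated assumptions, not code from the works cited: it contrasts the loop-carried recursion of a forward substitution, typical of the triangular solves in ILU-type preconditioners, with a Jacobi-style sweep whose iterations are mutually independent and can therefore be distributed across many processors. The one-dimensional bidiagonal system and the array names are assumptions made purely for clarity.

/* Illustrative sketch (not from the paper): why a recursive triangular solve
 * serializes while a Jacobi-style update does not. */
#include <stdio.h>

#define N 8

/* Forward substitution for a unit lower bidiagonal system:
 * y[i] depends on y[i-1], so the iterations cannot proceed concurrently. */
void forward_substitution(const double l[N], const double b[N], double y[N])
{
    y[0] = b[0];
    for (int i = 1; i < N; i++)
        y[i] = b[i] - l[i] * y[i - 1];   /* recursive: needs y[i-1] first */
}

/* One Jacobi sweep for the same system: every x_new[i] uses only old values,
 * so the loop could be split across processors with no serial dependence. */
void jacobi_sweep(const double l[N], const double b[N],
                  const double x_old[N], double x_new[N])
{
    x_new[0] = b[0];
    for (int i = 1; i < N; i++)
        x_new[i] = b[i] - l[i] * x_old[i - 1];   /* independent iterations */
}

int main(void)
{
    double l[N], b[N], y[N], x0[N] = {0}, x1[N];
    for (int i = 0; i < N; i++) { l[i] = 0.5; b[i] = 1.0; }

    forward_substitution(l, b, y);
    jacobi_sweep(l, b, x0, x1);

    printf("forward substitution y[N-1] = %f\n", y[N - 1]);
    printf("one Jacobi sweep     x[N-1] = %f\n", x1[N - 1]);
    return 0;
}

The exact solve finishes in one pass but only one cell can be updated at a time, whereas the Jacobi sweep updates all cells at once and must instead be repeated until convergence; this trade between serial exactness and parallel iteration is one reason existing solvers have been slow to migrate to massively parallel machines.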
