Abstract

Over the past 15 years, high performance computing has had a significant impact on the evolution of numerical predictive methods for flow in porous media, particularly for improved recovery from hydrocarbon reservoirs. The complexity of these reservoir simulation models has led to computational requirements that have consistently taxed the fastest computers. This work discusses how current state-of-the-art parallel architectures have been investigated to allow models that more closely approach realistic simulations while emphasizing accuracy and efficiency. Several modeling approaches have been investigated on different parallel architectures, and these investigations have, in general, shown great promise for the use of massively parallel computers in reservoir simulation. Despite these results, reservoir simulation has been slow to move toward parallel computing in a production environment. There appear to be several reasons for this. First, the recursive nature of existing linear solution techniques for reservoir modeling is not readily adaptable to massively parallel architectures. Second, the trade-off between load balancing and global data structure has yet to be thoroughly investigated. Finally, the role of well and facility constraints and of production optimization in massively parallel processing may lead to severe serial bottlenecks. Several approaches for resolving these difficulties are presented.
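The serial bottleneck attributed to recursive linear solution techniques can be illustrated with a minimal sketch (not from the paper): forward substitution, the triangular-solve kernel inside ILU-type preconditioners commonly used in reservoir simulators. Because each unknown depends on all previously computed unknowns, the outer loop carries a sequential dependency that resists fine-grained parallelization. The function name and the 3x3 example system are illustrative assumptions.

```python
def forward_substitution(L, b):
    """Solve L x = b for lower-triangular L.

    Illustrative only: the i-th unknown requires x[0..i-1], so the
    outer loop must run in order -- the recursive structure that makes
    such solvers hard to map onto massively parallel architectures.
    """
    n = len(b)
    x = [0.0] * n
    for i in range(n):            # serial dependency: cannot reorder
        s = b[i]
        for j in range(i):        # consumes all earlier results
            s -= L[i][j] * x[j]
        x[i] = s / L[i][i]
    return x

# Hypothetical 3x3 lower-triangular system for demonstration
L = [[2.0, 0.0, 0.0],
     [1.0, 3.0, 0.0],
     [0.0, 1.0, 4.0]]
b = [2.0, 4.0, 5.0]
print(forward_substitution(L, b))  # [1.0, 1.0, 1.0]
```

In contrast, solver components without this recursion (e.g., sparse matrix-vector products) distribute naturally across processors, which is one reason much of the parallel-solver literature focuses on reformulating or replacing the triangular-solve step.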
