Abstract

This article, written by Special Publications Editor Adam Wilson, contains highlights of paper SPE 167877, "Reservoir Simulations in a High-Performance Cloud-Computing Environment," by Morgan Edward Eldred, Asma Aboubakr, Ahmed Abubakr Al-Emadi, Thomas James O'Reilly, Nedal Barghouti, and Abdollah Orangi, Maersk Oil, prepared for the 2014 SPE Intelligent Energy Conference and Exhibition, Utrecht, The Netherlands, 1–3 April. The paper has not been peer reviewed.

In the upstream oil and gas industry, adoption of cloud computing is still immature because the industry has always been challenged by storage and computational capability. However, there is recent evidence for considering high-performance cloud computing (HPCC) because of the promise of benefits such as flexibility, accessibility, and cost reduction. HPCC may create an opportunity for small to midsized upstream companies that do not want to invest in the infrastructure needed to evaluate scientific applications.

Project Overview

The target of this project was to prove the concept of running simulation software in a high-performance computing cloud and to use the findings to design a framework, or methodology, that enables companies to pursue business opportunities iteratively while learning along the way. The outcome of the methodology is a dynamic tactical and strategic roadmap that leverages trends in HPCC.

Calculations and Results

The following cases were run on a local cluster at an early stage to validate the runs:

- HP_ICLOUD on the reference workstation with one central processing unit (CPU)
- HP_ICLOUD on the Enterprise Cloud (ECL) server with one CPU
- HP_ICLOUD_4 on the ECL server with four CPUs
- HP_ICLOUD_8 on the ECL server with eight CPUs

The four cases showed identical results for oil-production rate and cumulative oil over the duration of the field history, as expected. The case with a single CPU completed in approximately 20 hours; the run times with four and eight CPUs were 7.7 and 5.7 hours, respectively.

Fig. 1 shows that wall-clock time decreased as more CPUs were added, both for calculations performed internally and for those performed on the cloud servers. Internal calculations stagnated beyond four CPUs (i.e., sublinear scaling), whereas close-to-linear scaling was observed when calculations were run on the cloud servers. Assuming that this linear scaling persists beyond eight CPUs and extrapolating from this observation, it is hypothesized that, for larger jobs, the cloud servers would significantly outperform what can be achieved internally.
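To make the scaling numbers above easier to interpret, the short Python sketch below (not from the paper) computes speedup and parallel efficiency from the reported run times of roughly 20, 7.7, and 5.7 hours on one, four, and eight CPUs, and uses a simple Amdahl's-law fit to illustrate how an extrapolation beyond eight CPUs might be reasoned about. Attributing the 20-hour single-CPU time to the ECL server and the use of Amdahl's law are assumptions of this sketch, not results from the paper.

```python
# Illustrative only: speedup and parallel efficiency from the run times
# reported in the paper (about 20 h on 1 CPU, 7.7 h on 4 CPUs, 5.7 h on 8 CPUs),
# plus a simple Amdahl's-law fit used to extrapolate beyond 8 CPUs.
# The parallel fraction derived here is an assumption of this sketch,
# not a number from the paper.

run_times = {1: 20.0, 4: 7.7, 8: 5.7}  # CPUs -> wall-clock hours (assumed ECL server)

def speedup(n_cpus: int) -> float:
    """Speedup relative to the single-CPU run."""
    return run_times[1] / run_times[n_cpus]

def efficiency(n_cpus: int) -> float:
    """Parallel efficiency: speedup divided by CPU count."""
    return speedup(n_cpus) / n_cpus

def amdahl_parallel_fraction(n_cpus: int) -> float:
    """Solve Amdahl's law S = 1 / ((1 - p) + p/n) for p from one measured point."""
    s = speedup(n_cpus)
    return (1.0 - 1.0 / s) / (1.0 - 1.0 / n_cpus)

def amdahl_speedup(p: float, n_cpus: int) -> float:
    """Predicted speedup for n_cpus given parallel fraction p."""
    return 1.0 / ((1.0 - p) + p / n_cpus)

if __name__ == "__main__":
    for n in (4, 8):
        print(f"{n} CPUs: speedup {speedup(n):.1f}x, efficiency {efficiency(n):.0%}")

    # Extrapolate using the parallel fraction implied by the 8-CPU run.
    p = amdahl_parallel_fraction(8)
    print(f"Implied parallel fraction: {p:.2f}")
    for n in (16, 32, 64):
        print(f"{n} CPUs (extrapolated): ~{amdahl_speedup(p, n):.1f}x speedup")
```

On these run times the script reports speedups of roughly 2.6x and 3.5x (parallel efficiencies of about 65% and 44%); the scaling behavior actually observed in the project is summarized in Fig. 1 of the paper.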
