Abstract

Oil and gas exploration is among the riskiest and most expensive of all commercial endeavors. Exploration for oil and gas reservoirs proceeds by generating enormous amounts of seismic data and running that data through the computational wringer to reveal the most promising locations to drill. Data sets in this domain are among the largest in any industry, reaching multiple petabytes, and they must be funneled from the storage subsystem to the compute nodes at tens of gigabytes per second. "Cloud Computing and Big Data" describes a new generation of technologies and architectures designed to economically extract value from very large volumes of a wide variety of data, structured and unstructured, by enabling high-velocity capture, discovery, and analysis. These technologies have achieved remarkable success in some traditional business areas. In the field of seismic exploration, however, their potential has not yet been fully realized, and many challenges remain. Data are locked inside applications and cannot be shared efficiently. Data sets of this size strain storage resources and network bandwidth. The processing workloads place great demands on the computing infrastructure and its cost. Job access patterns may be random, sequential, or an unpredictable mix of both. Automatic or semi-automatic extraction of useful information for the interpretation and visualization of seismic images is still impractical. Furthermore, the oil exploration workflow has evolved into a processing chain, in which the output of one job becomes the input of the next, spanning acquisition, processing, interpretation, and visualization; a software ecosystem is therefore needed so that newly acquired seismic data can be processed in real time to meet time demands and keep the workflow flexible. To address these topics, this paper introduces the methodology and basic features of "Cloud Computing and Big Data" and analyzes the relationships among cloud computing, big data, the Internet of Things, Internet Plus, and other key technologies. Several existing cloud computing and big data systems are introduced. The technical characteristics of seismic acquisition, processing, and interpretation are then analyzed; the evidence presented shows that the integration of acquisition, processing, interpretation, and visualization has become the trend in seismic exploration. The paper then discusses the influence of "Cloud Computing and Big Data" on the evolution of seismic technology in three respects. The MapReduce framework, together with several successful applications, is presented to verify that cloud computing can handle the processing of extremely large and complex seismic data. Machine learning based on cloud computing is becoming a promising solution for seismic analysis, automatically extracting more value from the data. The combination of the Internet of Things and big data is a good choice for seismic acquisition. Finally, the paper argues that building a software ecosystem for seismic processing and improving seismic acquisition technologies based on the Internet of Things is the right path toward fully digital seismic exploration.
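
To make the MapReduce claim concrete, the sketch below shows how a map/shuffle/reduce workflow could stack seismic traces by common-midpoint (CMP) bin, a classic embarrassingly parallel seismic task. It is a minimal single-process illustration: the trace layout, the function names, and the CMP stacking step are assumptions chosen for demonstration, not the paper's actual implementation or data.

```python
from collections import defaultdict

# Minimal MapReduce-style sketch: stack seismic traces by common midpoint (CMP).
# The record format and the stacking reduce step are illustrative assumptions.

def map_trace(trace):
    """Map step: key each trace by its CMP bin so traces that image the
    same subsurface point are grouped together."""
    yield trace["cmp"], trace["samples"]

def reduce_stack(cmp_bin, traces):
    """Reduce step: stack (average) all traces in a CMP bin to improve
    the signal-to-noise ratio."""
    n = len(traces)
    stacked = [sum(samples) / n for samples in zip(*traces)]
    return cmp_bin, stacked

def run_mapreduce(records):
    """Driver: group mapped key/value pairs by key (the shuffle phase),
    then reduce each group. A real cluster distributes these phases
    across many nodes."""
    groups = defaultdict(list)
    for record in records:
        for key, value in map_trace(record):
            groups[key].append(value)
    return [reduce_stack(key, values) for key, values in groups.items()]

if __name__ == "__main__":
    # Tiny synthetic gather: two traces in CMP bin 0, one in bin 1.
    traces = [
        {"cmp": 0, "samples": [0.1, 0.4, 0.2]},
        {"cmp": 0, "samples": [0.3, 0.2, 0.0]},
        {"cmp": 1, "samples": [0.5, 0.1, 0.3]},
    ]
    for cmp_bin, stacked in run_mapreduce(traces):
        print(f"CMP {cmp_bin}: {stacked}")
```

On an actual cluster, the shuffle and reduce phases run in parallel across nodes, which is what would let this pattern scale toward the petabyte-class data sets described above.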
