Abstract

The E&P subsurface sector today faces the lowest reserve replacement ratios in decades and an increasing need to maximize recoverable oil and gas from developed fields. Many of the challenges in these areas stem from legacy systems that silo data and information critical for investment decisions. To turn the tide, the sector should call on a reliable, though perhaps under-recognized, ally: digital technology driven by open, accessible data. Whether in exploration, field development, or drilling, digital technology powered by open data can help the upstream industry access and contextualize big data and scalable computing power to make trustworthy time- and cost-saving decisions. With the right technology and software, the E&P subsurface sector can realistically avoid repetitive tasks, remove human biases, reduce processing time, enhance collaboration, and empower workers to become innovators. Liberated data take the guesswork out of E&P subsurface operations, making them more effective, more efficient, and better for all.

Minimizing Uncertainty

Though some software products claim to minimize uncertainty in E&P subsurface decision-making, a degree of uncertainty always exists, so decisions should be based on the widest possible range of information. A study of 97 wells drilled from 2003 to 2013 in the UK sector of the North Sea found that more than 50% failed because of poorly integrated data and insights, improperly applied domain science, and a lack of context and effective peer review. Digital products and software that run on liberated, contextualized data could have rectified this, putting subsurface information to use across stakeholders, enabling data-driven decision-making, and maximizing the wells' uptime and delivery. The largest independent company on the Norwegian Continental Shelf is democratizing access to all subsurface and drilling data across all its teams: data and information are shared and accessible via a map-based polygon search. In this way, every team works from the same, most complete context, and each expert can enrich it to generate more domain-specific insights.

Reliability of Data Quality

Data and information should always be auditable. If they are not, subsurface interpretations and models run a high risk of being incorrect, leading to decisions based on overestimated reserves calculations that drive up investment and costs. By contrast, liberated data, enriched with quality tags referencing source systems, users, and history, can provide the foundation for best practices in automated enterprise data governance: users will always know which datasets are validated and fit to run their digital workflows. Many implementation projects have shown that when data are liberated and ingested into cloud-based environments, data-quality issues are identified that would have remained undetectable had the data stayed in a legacy system. Once data are liberated and accessible to everyone, cloud technology also enables hosting environments to create, deploy, and publish data management "functions" that provide continuous, semi- or fully automated data QC and standardization.
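To make the map-based polygon search concrete, the sketch below shows the underlying idea as a spatial filter over georeferenced well metadata, written in Python with the shapely library. The well records, coordinates, and field names are hypothetical illustrations, not the company's actual data model.

```python
# Minimal sketch of a map-based polygon search over well metadata,
# using the shapely geometry library. All records below are hypothetical.
from shapely.geometry import Point, Polygon

# Hypothetical catalog of subsurface datasets, keyed by surface location (lon, lat).
wells = [
    {"name": "A-1", "lon": 2.10, "lat": 59.20, "datasets": ["logs", "seismic"]},
    {"name": "B-7", "lon": 2.55, "lat": 59.45, "datasets": ["logs"]},
    {"name": "C-3", "lon": 3.40, "lat": 60.10, "datasets": ["drilling reports"]},
]

# Polygon drawn by the user on the map (lon/lat vertices).
area_of_interest = Polygon([(2.0, 59.0), (3.0, 59.0), (3.0, 59.6), (2.0, 59.6)])

# Return every well whose location falls inside the drawn polygon.
hits = [w for w in wells if area_of_interest.contains(Point(w["lon"], w["lat"]))]
for w in hits:
    print(w["name"], w["datasets"])  # A-1 and B-7 match; C-3 lies outside
```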
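The quality tags referencing source systems, users, and history amount to provenance metadata carried alongside each dataset. A minimal sketch of such a record follows; every field name here is an assumption for illustration, not any specific product's schema.

```python
# Minimal sketch of provenance-tagged dataset metadata for auditability.
# All field names are illustrative assumptions, not a real product's schema.
from dataclasses import dataclass, field


@dataclass
class DatasetRecord:
    name: str
    source_system: str           # legacy system the data were liberated from
    loaded_by: str               # user responsible for ingestion
    history: list[str] = field(default_factory=list)  # audit trail of changes
    validated: bool = False      # set True once QC has passed


def runnable(datasets: list[DatasetRecord]) -> list[DatasetRecord]:
    """Digital workflows should consume only validated, auditable datasets."""
    return [d for d in datasets if d.validated]
```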
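The data management "functions" for continuous QC can be pictured as small validators run automatically on ingested data. The sketch below checks a gamma-ray log curve; the plausible-range limits are assumptions, while -999.25 is the customary null marker in LAS well-log files.

```python
# Sketch of an automated QC "function" that could run continuously on ingested
# well-log data. The range limits are illustrative; -999.25 is the customary
# null marker in LAS well-log files.
LAS_NULL = -999.25


def qc_gamma_ray(samples: list[float]) -> dict:
    """Flag null and physically implausible gamma-ray readings (API units)."""
    issues = []
    for i, value in enumerate(samples):
        if value == LAS_NULL:
            issues.append((i, "null value"))
        elif not 0.0 <= value <= 400.0:  # assumed plausible API range
            issues.append((i, f"out of range: {value}"))
    return {"passed": not issues, "issues": issues}


# Example: one null and one spurious reading are flagged.
print(qc_gamma_ray([45.0, 120.5, LAS_NULL, 823.0]))
```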
