Abstract

_This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 211093, “Development of an Artificial-Intelligence-Based Well-Integrity Monitoring Solution,” by Prihandono Aditama, SPE, Tina Koziol, and Meindert Dillen, Wintershall Dea. The paper has not been peer reviewed._

This paper presents the proof of concept (PoC) of artificial intelligence (AI)-based well-integrity monitoring for gas-lift, natural-flow, and water-injector wells. AI-model prototypes were built to detect annulus leakage as incident-relevant anomalies in time-series sensor data. The AI models for gas-lift and natural-flow wells achieved a sufficient level of performance, detecting a minimum of 75% of historical events with fewer than one false positive per month per well.

Problem Statement

In the exploration and production industry, historical well-integrity events are rare because systems are designed as robustly as possible to prevent incidents. For the authors’ study, only 12 historical wellbore-leakage incidents were available across 13 years of well operations in the assets considered. The events were treated separately for each well type. This class of problem is known as anomaly detection and has been studied in various fields where time-series data are used.

For a historical incident, two types of anomalies were identified (a minimal labeling sketch based on these definitions appears at the end of this section):

- Short-term incident-relevant anomaly (SIRA), an anomaly occurring within 1 day
- Long-term incident-relevant anomaly (LIRA), an anomaly lasting for more than 1 day

Objective and Success Criteria

The primary objective of the PoC was to validate the hypothesis that incident-relevant anomalies can be predicted and monitored from historical well-sensor data by using adequate AI models. A generalized AI model was built so that it could detect well-annulus leakage in future wells.

Success criteria were defined by two approaches: a human-centric approach and a statistical approach. For the human-centric approach, future users were interviewed to obtain insights into their requirements for tool performance. The feedback was mapped against a confusion matrix. Users preferred to maximize the number of correctly predicted historical incidents (true positives) while keeping the number of false positives acceptable. In general, wrong predictions (false positives) were seen as acceptable to a certain degree, depending on the number of wells under a user’s responsibility. Based on these insights, the AI model was optimized for recall instead of precision. Recall (sometimes known as sensitivity) is defined as the number of true positives divided by the sum of true positives and false negatives; it measures the fraction of actual incidents that the model detects. The tradeoff of this choice is that the AI model will report some false alarms.
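Because the PoC targets recall subject to a bounded false-alarm rate, the success criteria reduce to simple confusion-matrix arithmetic. The sketch below is illustrative, not the authors’ implementation: the thresholds (at least 75% of historical events detected, fewer than one false positive per month per well) come from the paper, while the function names and the example counts are hypothetical.

```python
def recall(true_positives: int, false_negatives: int) -> float:
    """Recall (sensitivity) = TP / (TP + FN): the share of actual
    incidents that the model detects."""
    return true_positives / (true_positives + false_negatives)


def meets_success_criteria(tp: int, fn: int, fp: int,
                           months: float, wells: int) -> bool:
    """Check the PoC acceptance thresholds described in the paper:
    recall of at least 0.75 and fewer than one false positive
    per month per well."""
    fp_per_month_per_well = fp / (months * wells)
    return recall(tp, fn) >= 0.75 and fp_per_month_per_well < 1.0


# Hypothetical example: 9 of 12 historical incidents detected, with
# 40 false alarms over 13 years (156 months) of data from 10 wells.
print(recall(9, 3))                               # 0.75
print(meets_success_criteria(9, 3, 40, 156, 10))  # True
```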
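Returning to the anomaly definitions in the Problem Statement, the SIRA/LIRA split reduces to a duration threshold on detected anomaly events. The 1-day cutoff is taken from the paper; the `AnomalyEvent` structure and `label_event` function are hypothetical names introduced only for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class AnomalyEvent:
    """A detected anomaly on one well, bounded by start and end times."""
    well: str
    start: datetime
    end: datetime


def label_event(event: AnomalyEvent) -> str:
    """Label an anomaly as LIRA if it lasts more than 1 day,
    otherwise SIRA, following the duration-based split in the paper."""
    return "LIRA" if event.end - event.start > timedelta(days=1) else "SIRA"


# Illustrative usage with made-up timestamps:
event = AnomalyEvent("A-12", datetime(2020, 3, 1, 6), datetime(2020, 3, 4, 18))
print(label_event(event))  # -> "LIRA" (spans more than 1 day)
```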
