Abstract

An explainable deep learning artificial intelligence (AI) model has been designed for instantaneous identification of the top of cement (TOC). Identifying the TOC is a critical part of evaluating a well cementing operation and of detecting unexpected fluid loss, cement contamination, hole enlargement, and similar problems. The model's results are compared with the TOC determined by human interpreters through manual review of acoustic waveform data, a time- and resource-intensive interpretation technique that is highly dependent on individual expertise. In the AI model described, attention layers are applied within the neural network architecture. When the model outputs the TOC, these layers highlight a zone of interest on the waveform data. This zone indicates the area given higher weight by the model when making its interpretation; it is, in effect, the "explanation" given by the model. Thus, the model simultaneously supplies the answer and indicates how it arrived at that answer. The reliability of the results can be verified by comparing the zone of interest chosen manually by experts with the model's selection, and experiments have confirmed a good match between the two.

The AI model was tested on data from 33 cementing jobs collected in the field: 26 jobs were used for training and 7 for testing. On the 7 testing jobs, the experiment showed an average difference of just 4.7 ft between the TOC determined manually by human interpreters and the model's estimate. The model outperforms neural network architectures without attention layers, such as CNN and U-Net, demonstrating the effectiveness of the attention layers in improving AI model accuracy. The experiment also demonstrated the increased efficiency and reduced cost of TOC identification achieved with the new technique, as the model takes at most 3 seconds to determine the TOC for each job. In addition, the zone of interest selected by the model highlights the range where the acoustic waveform changes abruptly along the depth dimension in each job, corresponding to the human interpreters' cognitive process when determining the TOC. As reported by experts, displaying the zone of interest makes the model and its results trustworthy, because the "explanation" is reasonable.

The proposed model sheds light on making industrial deep learning neural networks explainable. Although neural networks were proposed decades ago, their black-box nature has limited their adoption in the energy industry, where poor engineering or commercial decisions can have severe financial or HSE consequences. Explainable AI models enable users to understand how the AI makes its interpretations, whether the results are reliable, and when they should take manual control of the AI system.
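The abstract does not give implementation details, but the following minimal sketch (in PyTorch) illustrates the general idea of attention-based explanation for a depth regression task of this kind. The architecture, layer sizes, and variable names are illustrative assumptions rather than the authors' published design; the key point is that the softmax attention weights over the depth axis can be returned alongside the TOC estimate and overlaid on the waveform log as the model's zone of interest.

# Minimal sketch, not the authors' exact architecture: attention weights over
# the depth axis double as the model's "explanation" for its TOC estimate.
# Layer sizes, kernel widths, and names here are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionTOCRegressor(nn.Module):
    def __init__(self, n_channels: int = 8, hidden: int = 64):
        super().__init__()
        # 1-D convolutions along the depth dimension; input channels hold the
        # acoustic waveform features recorded at each depth sample.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=7, padding=3),
            nn.ReLU(),
        )
        self.attn_score = nn.Linear(hidden, 1)   # one score per depth position
        self.head = nn.Linear(hidden, 1)         # regresses the TOC depth

    def forward(self, waveforms: torch.Tensor):
        # waveforms: (batch, n_channels, n_depth_samples)
        feats = self.encoder(waveforms)                  # (B, hidden, D)
        feats = feats.transpose(1, 2)                    # (B, D, hidden)
        scores = self.attn_score(feats).squeeze(-1)      # (B, D)
        weights = torch.softmax(scores, dim=-1)          # attention over depth
        pooled = (weights.unsqueeze(-1) * feats).sum(1)  # (B, hidden)
        toc = self.head(pooled).squeeze(-1)              # (B,) predicted TOC
        # Returning the weights lets an interpreter plot the zone of interest
        # on the waveform data next to the predicted TOC.
        return toc, weights

In such a setup, the depth range where the attention weights peak would typically be compared with the interval an expert would inspect manually, which is the kind of agreement the paper reports between the model's zone of interest and the interpreters' selections.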
