Abstract

There have been numerous efforts to apply machine learning (ML) techniques to field-scale automated interpretation of well log data. A critical prerequisite for such automated interpretation is to ensure that the log characteristics are reasonably consistent across multiple wells. Manually correcting logs for consistency is laborious, subjective, and error-prone. For some logs, such as gamma ray and neutron porosity, systematic inconsistencies or errors can be caused by borehole effects as well as miscalibration. Biased or consistently inaccurate log data can confound ML approaches into learning erroneous relationships, leading to inaccurate lithology predictions, erroneous reservoir estimates, and incorrect formation markers. To overcome these difficulties, we have developed a deep learning method that provides petrophysicists with a set of consistent logs through an automated workflow. The corrections we currently target are systematic shifts or errors in the common logs, especially the gamma ray and neutron logs, and, to a lesser extent, local errors due to washouts. The workflow consists of two steps. The first step is a semiautomated approach for selecting the wells used for training and validation; it employs statistical analysis to detect and segregate wells with similar log distributions. The second step is the core of the workflow: it samples intervals across the multiple logs of the wells identified in the first step and trains a convolutional neural network (CNN) with a U-Net architecture to identify and correct systematic errors such as shifts, gains, random noise, and small local disturbances. The training process is self-supervised and requires no human labels. This self-supervised deep learning methodology automatically discovers implicit features and contextually applies the relevant log correction. The proposed method has been applied to multiple oil fields around the world, and field tests were successfully conducted in two scenarios. The first scenario corrects synthetic noise and artifacts added to field data when triple-combo logs (gamma ray, density, neutron, and resistivity) are available; in this scenario, the tests targeted systematic errors in the gamma ray logs alone or in both the gamma ray and neutron porosity logs simultaneously. The second scenario corrects native field noise in the gamma ray and neutron porosity logs when quad-combo logs (gamma ray, density, neutron, resistivity, and compressional slowness) are available.
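
To make the first step concrete, the sketch below shows one way to detect and segregate wells with similar log distributions. The abstract does not name the statistic used, so this sketch assumes a two-sample Kolmogorov-Smirnov test on each well's gamma ray samples; the function `select_consistent_wells`, the reference-well heuristic, and the threshold value are all hypothetical stand-ins.

```python
# A minimal sketch of the well-selection idea (step 1), assuming the
# similarity measure is a two-sample Kolmogorov-Smirnov statistic on each
# well's gamma ray distribution; the statistic, threshold, and names here
# are illustrative, not the authors' exact procedure.
import numpy as np
from scipy.stats import ks_2samp

def select_consistent_wells(logs_by_well, max_ks=0.2):
    """Keep wells whose log distribution is close to a field reference well.

    logs_by_well: dict mapping well name -> 1-D array of log samples
                  (e.g., gamma ray values with nulls already dropped).
    max_ks:       KS-distance threshold above which a well is segregated.
    """
    names = list(logs_by_well)
    # Pairwise KS distances between the wells' empirical distributions.
    dist = np.zeros((len(names), len(names)))
    for i, a in enumerate(names):
        for j, b in enumerate(names):
            if i < j:
                d = ks_2samp(logs_by_well[a], logs_by_well[b]).statistic
                dist[i, j] = dist[j, i] = d
    # Take the well most similar to all others on average as the reference.
    ref = int(np.argmin(dist.mean(axis=1)))
    keep = [n for i, n in enumerate(names) if dist[ref, i] <= max_ks]
    drop = [n for n in names if n not in keep]
    return keep, drop
```

Wells flagged by such a screen would be withheld from training and validation, so that inconsistent logs do not bias the relationships the network learns.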
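
The second step's self-supervised training can be sketched similarly: clean log intervals are corrupted with synthetic systematic errors (shifts, gains, random noise), and a 1-D U-Net is trained to recover the originals. Everything below, including the tiny two-level network, the corruption magnitudes, and the random stand-in data, is illustrative under that reading of the abstract and is not the authors' exact architecture.

```python
# A minimal sketch of the self-supervised corruption-and-reconstruction
# training (step 2), assuming PyTorch; layer sizes and corruption
# magnitudes are illustrative only.
import torch
import torch.nn as nn

class TinyUNet1D(nn.Module):
    """Two-level 1-D U-Net: encode, downsample, decode with a skip connection."""
    def __init__(self, channels):  # channels = number of input logs
        super().__init__()
        self.enc = nn.Sequential(nn.Conv1d(channels, 16, 5, padding=2), nn.ReLU())
        self.down = nn.MaxPool1d(2)
        self.mid = nn.Sequential(nn.Conv1d(16, 32, 5, padding=2), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2)
        self.dec = nn.Conv1d(16 + 32, channels, 5, padding=2)

    def forward(self, x):
        e = self.enc(x)                      # skip-connection features
        m = self.up(self.mid(self.down(e)))  # bottleneck path
        return self.dec(torch.cat([e, m], dim=1))

def corrupt(clean):
    """Apply synthetic systematic errors: a constant shift, a gain, and noise."""
    shift = 0.3 * torch.randn(clean.size(0), clean.size(1), 1)
    gain = 1.0 + 0.2 * torch.randn(clean.size(0), clean.size(1), 1)
    noise = 0.05 * torch.randn_like(clean)
    return gain * clean + shift + noise

# Synthetic stand-in data: 64 intervals, 4 logs (e.g., GR, density,
# neutron, resistivity), 128 depth samples each.
data = torch.randn(64, 4, 128)
loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(data),
                                     batch_size=8)

model = TinyUNet1D(channels=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for (clean,) in loader:
    # The network sees corrupted intervals and must recover the originals.
    pred = model(corrupt(clean))
    loss = nn.functional.mse_loss(pred, clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the corruptions are generated on the fly, the "labels" are simply the uncorrupted intervals themselves, which is what makes the training self-supervised and label-free.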
