Abstract

Deep-learning (DL) techniques have been proposed to solve geophysical seismic facies classification problems without introducing the subjectivity of human interpreters’ decisions. However, such DL algorithms are “black boxes” by nature, and their underlying basis can hardly be interpreted. Subjectivity is therefore often introduced during the quality control process, and any interpretation of DL models can become an important source of information. To provide such a degree of interpretation and retain a higher level of human intervention, the development and application of explainable DL methods have been explored. To showcase the usefulness of such methods in the field of geoscience, we apply a prototype-based neural network (NN) to the seismic facies classification problem. The “prototype” vectors, jointly learned to capture the stereotypical qualities of a given label, form a set of representative samples. This interpretable component thereby turns the “black box” into a “gray box.” We demonstrate how prototypes can be used to explain NN methods by directly inspecting key functional components. We provide substantial explanations by examining: 1) the prototypes’ corresponding input–output pairs; 2) the values generated at the explainable layer; and 3) the numerical structure of the shallow layers located between the interpretable latent prototype layer and the output layer. Most importantly, this series of interpretations shows how geophysical knowledge can be used to understand the actual function of the seismic facies classifier and thereby support the quality control process of DL models. The method is applicable to many geoscientific classification problems in which in-depth interpretations of NN classifiers are required.
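
The abstract describes a latent prototype layer whose activations are mapped to class scores by shallow layers. The following is a minimal sketch of how such a prototype-based classifier can be structured, assuming a PyTorch-style implementation with squared Euclidean distances and a ProtoPNet-like log-similarity activation; the class name PrototypeClassifier and these specific design choices are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    """Hypothetical prototype-based classifier: an encoder maps the input
    to a latent vector, a prototype layer measures similarity to learned
    prototype vectors, and a shallow linear layer maps similarities to
    class scores."""

    def __init__(self, encoder: nn.Module, latent_dim: int,
                 n_prototypes: int, n_classes: int):
        super().__init__()
        self.encoder = encoder
        # Prototype vectors live in the same latent space as the encoded
        # input and are learned jointly with the rest of the network.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, latent_dim))
        # Shallow layer between the interpretable prototype layer and the
        # output; its weights indicate how each prototype contributes to
        # each class.
        self.classifier = nn.Linear(n_prototypes, n_classes)

    def forward(self, x):
        z = self.encoder(x)                              # latent encoding, shape (batch, latent_dim)
        # Squared Euclidean distance to every prototype (smaller = more similar).
        dists = torch.cdist(z, self.prototypes).pow(2)   # shape (batch, n_prototypes)
        # Convert distances to similarity scores; these activations form the
        # explainable prototype layer.
        similarities = torch.log((dists + 1.0) / (dists + 1e-4))
        logits = self.classifier(similarities)           # class scores
        return logits, similarities, dists

Under these assumptions, the three levels of explanation described above correspond to inspecting 1) the inputs that most strongly activate each learned prototype and the associated outputs, 2) the similarity values produced at the prototype layer, and 3) the weights of the shallow classifier layer connecting prototypes to classes.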
