Abstract

Fault detection is a crucial task in seismic structure interpretation. Convolutional neural network (CNN)-based methods generally require a large amount of labeled data for network training. One way to build the labeled data is to create synthetic seismic images with corresponding fault labels. However, it is hard to ensure that the synthetic data have the same fault feature distributions as the field data, which may lead to inaccurate and unreliable predictions. Another way is to manually label the faults, which is time-consuming and subjective. In this letter, we propose using knowledge distillation (KD) to improve fault detection by integrating features from a large number of synthetic samples and a small number of field samples. We distill knowledge from an ensemble of two teacher CNNs to train a student CNN (applied to the final target) for seismic fault detection. In our work, one segmentation teacher CNN is trained on synthetic samples with known ground-truth fault labels, and another classification teacher CNN is trained on field samples with manually picked labels. A classification student network is then trained on samples generated by voting on the results of the two teacher models. The student CNN thus learns not only the general fault characteristics of the synthetic data but also the specific fault features of the target field data. Tests on field data show that the student CNN highlights seismic faults more accurately and at higher resolution than the teacher CNNs.
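
To make the distillation pipeline concrete, the teacher-voting and student-training steps can be sketched roughly as follows in PyTorch. This is a minimal illustration, not the letter's implementation: the network architectures, the patch-level reduction of the segmentation teacher's per-pixel output, and the 0.5 agreement threshold are all assumptions made for the sketch.

import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    # Small CNN that outputs one fault/no-fault logit per seismic patch.
    # Used here both as the classification teacher and as the student;
    # the letter does not specify the real architectures, so this is a
    # stand-in.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))  # (N, 1) logits

def vote_pseudo_labels(seg_teacher, cls_teacher, patches, thresh=0.5):
    # Keep only the patches on which the two teachers agree, and use the
    # agreed label as the training target. Reducing the segmentation
    # teacher's per-pixel fault map to a patch-level vote via its mean
    # probability is an assumption about how the voting is done.
    with torch.no_grad():
        seg_prob = torch.sigmoid(seg_teacher(patches))           # (N,1,H,W)
        seg_vote = (seg_prob.mean(dim=(2, 3)) > thresh).float()  # (N,1)
        cls_vote = (torch.sigmoid(cls_teacher(patches)) > thresh).float()
    agree = (seg_vote == cls_vote).squeeze(1)
    return patches[agree], seg_vote[agree]

def train_student(student, patches, labels, epochs=10, lr=1e-3):
    # Standard supervised training of the student on the voted pseudo-labels.
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(student(patches), labels).backward()
        opt.step()

# Example wiring. In practice, seg_teacher would be pretrained on synthetic
# images and cls_teacher on hand-labeled field patches before voting; the
# dummy networks and random patches below only demonstrate the data flow.
seg_teacher = nn.Conv2d(1, 1, 3, padding=1)   # stand-in per-pixel fault net
cls_teacher = PatchClassifier()
student = PatchClassifier()
field_patches = torch.randn(64, 1, 32, 32)    # unlabeled field patches
patches, labels = vote_pseudo_labels(seg_teacher, cls_teacher, field_patches)
if len(patches) > 0:
    train_student(student, patches, labels)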
