Abstract

Distributed acoustic sensing (DAS) is an emerging seismic acquisition technique with great practical potential. However, various types of noise severely corrupt DAS records, making signal recovery difficult, particularly in regions with a low signal-to-noise ratio (S/N). Existing deep-learning methods address this challenge by augmenting the data sets or increasing architectural complexity, which can cause over-denoising and a heavy computational burden. Hence, a heterogeneous knowledge distillation (HKD) method is developed to address signal reconstruction under low S/N more efficiently. HKD adopts ResNet-20 as both the teacher and student (T-S) models, using residual learning and skip connections to facilitate feature representation at deeper levels. The main contribution is training the T-S framework with different noise levels. The teacher model, trained on slightly noisy data, serves as a powerful feature extractor that captures accurate signal features, because high-quality data are easy to recover. By minimizing the difference between the outputs of the T-S models, the student model, trained on severely noisy data, distills the missing signal features from the teacher to improve its own signal recovery, which enables heterogeneous feature distillation. Furthermore, simultaneous positive and negative learning (P&NL) is developed to extract more useful features from the teacher, enabling the T-S framework to learn from both the predicted signal and the predicted noise during training. Consequently, a new loss function that combines the student denoising loss and the HKD loss weighted by P&NL is developed to alleviate signal leakage. The experimental results demonstrate that HKD achieves distinct and consistent signal recovery without increasing computational cost.
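The combined objective described above can be sketched as follows. This is a minimal illustrative example, not the paper's exact formulation: the function and parameter names (`combined_loss`, `alpha`, `beta`), the choice of norms, and the weighting scheme are all assumptions, since the abstract does not give the precise definitions.

```python
import numpy as np

def combined_loss(student_out, teacher_out, noisy_input, clean_signal,
                  alpha=0.5, beta=0.5):
    """Hedged sketch of a student-denoising + HKD loss with P&NL.

    All weights and norm choices are illustrative assumptions.
    """
    # Positive learning: the student's predicted signal should match
    # the clean signal (L2 norm assumed here).
    pos_loss = np.mean((student_out - clean_signal) ** 2)

    # Negative learning: the student's predicted noise (input minus
    # predicted signal) should match the true noise (L1 norm assumed,
    # to illustrate learning from the noise estimate as well).
    true_noise = noisy_input - clean_signal
    pred_noise = noisy_input - student_out
    neg_loss = np.mean(np.abs(pred_noise - true_noise))

    # HKD term: the student distills the teacher's output, so its
    # prediction on severely noisy data moves toward the features the
    # teacher recovered from slightly noisy data.
    hkd_loss = np.mean((student_out - teacher_out) ** 2)

    return pos_loss + beta * neg_loss + alpha * hkd_loss
```

In this sketch the P&NL idea appears as two complementary terms (on the predicted signal and on the predicted noise), and the distillation term pulls the student's output toward the teacher's; a perfect student with a perfect teacher drives the loss to zero.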
