Abstract

Seismic waveform data recorded at stations can be thought of as a superposition of the signal from a source of interest and noise from other sources. Frequency-based filtering methods for waveform denoising perform poorly when the targeted signal and noise occupy similar frequency bands. Recently, denoising techniques based on deep-learning convolutional neural networks (CNNs), in which a recorded waveform is decomposed into signal and noise components, have led to improved results. These CNN methods, which use short-time Fourier transform (STFT) representations of the time series, provide signal and noise masks for the input waveform. These masks are used to create the denoised signal and designaled noise waveforms, respectively. However, advancements in the field of image denoising have shown the benefits of incorporating discrete wavelet transforms (DWTs) into CNN architectures to create multilevel wavelet CNN (MWCNN) models. The MWCNN model preserves the details of the input owing to the good time–frequency localization of the DWT. Here, we use a data set of over 382,000 constructed seismograms recorded by the University of Utah Seismograph Stations network to compare the performance of CNN- and MWCNN-based denoising models. Evaluation of both models on constructed test data shows that the MWCNN model outperforms the CNN model in recovering the ground-truth signal component, in terms of both waveform similarity and preservation of amplitude information. Evaluation on real-world data shows that both the CNN and MWCNN models outperform standard band-pass filtering (BPF), with average improvements in signal-to-noise ratio of 9.6 and 19.7 dB, respectively, relative to BPF. Evaluation of continuous data suggests the MWCNN denoiser can improve both signal detection capabilities and phase arrival time estimates.
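
The mask-based reconstruction step described in the abstract can be sketched as follows. This is a minimal illustration using NumPy/SciPy, not the authors' implementation: the sampling rate, STFT window length, and the placeholder mask arrays (which stand in for the outputs of a trained denoising network) are all assumptions. The SNR-gain helper assumes SNR is defined as a power ratio in decibels.

    import numpy as np
    from scipy.signal import stft, istft

    def apply_masks(waveform, signal_mask, noise_mask, fs=100.0, nperseg=64):
        """Apply predicted time-frequency masks to an input waveform.

        signal_mask and noise_mask are placeholders for network outputs
        (values in [0, 1], same shape as the STFT of the input). Returns
        the denoised signal and the corresponding noise waveform.
        """
        _, _, Z = stft(waveform, fs=fs, nperseg=nperseg)
        # Element-wise masking of the complex STFT, then inverse STFT
        _, denoised_signal = istft(signal_mask * Z, fs=fs, nperseg=nperseg)
        _, separated_noise = istft(noise_mask * Z, fs=fs, nperseg=nperseg)
        return denoised_signal, separated_noise

    def snr_db(signal_window, noise_window):
        """SNR in decibels, assuming a mean-power ratio definition."""
        return 10.0 * np.log10(np.mean(signal_window**2) / np.mean(noise_window**2))

An SNR improvement such as the reported 9.6 or 19.7 dB would then correspond to the difference between snr_db evaluated on the denoised trace and on the band-pass-filtered trace, under the same windowing assumptions.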
