Abstract

Random noise attenuation is an essential step in seismic data processing. Owing to complex geological conditions and acquisition environments, the intensities of the effective signal and the random noise vary in time and space. In addition, the morphology of seismic events is complex and diverse, with features such as steep dips and rapid lateral changes. These conditions require the denoiser to adjust its filtering policy dynamically. In this paper, we propose a reinforcement learning-based seismic denoising (RLSD) model built on the asynchronous advantage actor-critic (A3C) framework. In this framework, the RLSD agent uses a policy network to learn a denoising policy for each state, i.e., a sample of the seismic data, and selects a suitable filter from a preset action space composed of multiple simple yet effective seismic filters with different parameters. The agent also uses a value network together with a region-adaptive weighted reward function to accurately evaluate the denoising effect on nonstationary seismic signals. A curriculum learning strategy is adopted to help the RLSD model converge on complex seismic data by training first on stationary data and then on nonstationary data, and a local similarity-based reward function is used to fine-tune the model so that it better matches the data to be processed. Applications to synthetic and field seismic data confirm that the proposed RLSD model performs well in preserving nonstationary signals and suppressing noise by adaptively adjusting its denoising policy according to complex structural features and noise levels. The source code is available at https://github.com/liangc-code/RLSD.
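To make the core idea of the abstract concrete, the following is a minimal sketch, not the authors' implementation, of an actor-critic agent whose discrete action space is a set of preset filters applied to a seismic data patch. The specific filters, network sizes, patch shape, and function names are illustrative assumptions; the actual RLSD model, action space, and reward functions are defined in the paper and the linked repository.

```python
# Minimal sketch (assumed configuration, not the authors' code) of the RLSD idea:
# a policy network picks one filter from a preset action space for each state
# (a patch of seismic data); a value head estimates the state value as in A3C.
import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import gaussian_filter, median_filter, uniform_filter

# Preset action space: simple filters with different parameters (illustrative).
ACTIONS = [
    lambda x: x,                              # no-op: keep the patch unchanged
    lambda x: uniform_filter(x, size=3),      # 3x3 mean filter
    lambda x: median_filter(x, size=3),       # 3x3 median filter
    lambda x: median_filter(x, size=5),       # 5x5 median filter
    lambda x: gaussian_filter(x, sigma=1.0),  # Gaussian smoothing
]

class ActorCritic(nn.Module):
    """Shared convolutional trunk with a policy head (actor) that scores the
    preset filters and a value head (critic) that estimates the state value."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        feat = 32 * 4 * 4
        self.policy = nn.Linear(feat, n_actions)  # actor: filter selection
        self.value = nn.Linear(feat, 1)           # critic: state value

    def forward(self, x):
        h = self.trunk(x)
        return torch.softmax(self.policy(h), dim=-1), self.value(h)

def denoise_patch(patch: np.ndarray, model: ActorCritic) -> np.ndarray:
    """Apply the filter chosen by the learned policy to one data patch."""
    with torch.no_grad():
        x = torch.from_numpy(patch).float()[None, None]  # (1, 1, H, W)
        probs, _ = model(x)
        action = int(torch.argmax(probs, dim=-1))        # greedy at test time
    return ACTIONS[action](patch)

if __name__ == "__main__":
    model = ActorCritic(n_actions=len(ACTIONS))
    noisy = np.random.randn(32, 32).astype(np.float32)   # stand-in for a patch
    print(denoise_patch(noisy, model).shape)              # (32, 32)
```

In the paper's setting, the policy and value heads would be trained asynchronously with advantage estimates computed from the region-adaptive weighted reward, and the filter choice would vary across patches according to local structure and noise level; the sketch above only shows the inference-time filter selection.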
