Abstract

Distributed acoustic sensing (DAS) is a new, rapidly developing acquisition technology. Compared with conventional geophones, it offers many advantages, such as full-well coverage, high sampling density, and greater tolerance of harsh environments. However, DAS data contain various new types of noise with different characteristics, which makes weak-signal recovery difficult and hinders subsequent inversion, imaging, and interpretation. Traditional methods and their extensions can remove only particular types of DAS noise, often with serious signal-energy loss, and their denoising performance depends strongly on threshold functions or manually set parameters. In recent years, deep learning has attracted much attention in DAS data processing. However, convolutional neural networks capture only local information, and the commonly used channel attention assigns a weight to each channel to reflect its importance while neglecting the features within each channel. The popular self-attention mechanism (SAM) can efficiently capture long-range dependencies between different locations or patches of the input. Thus, a novel subspace projection attention network (SPANet) is designed from a new projection perspective. In the proposed network, feature basis vectors are generated to guide the decoupled projection of the noise components from the signal components. Before feature fusion, low-level surface features are projected under the guidance of high-level abstract features through SAM. The projection operation helps to find the optimal subspace that preserves the fine structure of the input as much as possible, better capturing the potential relevance in DAS data. Finally, experimental results on synthetic and field data illustrate that SPANet recovers the effective reflection signals well, whether in a shallow layer with strong noise or a deep layer with weak energy. Its superior performance compared with several traditional and network-based methods is fully verified and analyzed.
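The core operation described above can be illustrated with a minimal sketch. This is not the authors' implementation; it simply shows, under assumed shapes and with randomly initialized (rather than learned) basis vectors, how attention over a small set of feature basis vectors projects low-level features onto the subspace those vectors span:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def subspace_projection_attention(features, basis):
    """Hedged sketch of subspace projection attention.

    features: (n, c) flattened low-level feature vectors
              (n spatial positions, c channels).
    basis:    (k, c) feature basis vectors (k << n), standing in
              for the guiding high-level abstract features.
    Returns an (n, c) projection of `features` onto the subspace
    spanned by `basis`, weighted by attention scores.
    """
    c = features.shape[1]
    scores = features @ basis.T / np.sqrt(c)   # (n, k) scaled similarities
    weights = softmax(scores, axis=-1)         # attention over the k basis vectors
    return weights @ basis                     # (n, c) projected features

# Toy example with assumed dimensions (in practice the basis
# vectors would be produced by the network, not sampled randomly).
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 16))   # 64 positions, 16 channels
b = rng.standard_normal((4, 16))    # 4 basis vectors
proj = subspace_projection_attention(x, b)
print(proj.shape)  # (64, 16)
```

Because the output is a weighted combination of the k basis vectors, every projected feature lies in the k-dimensional subspace they span, which is the decoupling idea the abstract attributes to the projection step.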
