Abstract

The probability distribution of three-dimensional sound speed fields (3D SSFs) in an ocean region encapsulates vital information about their variations, serving as a valuable data-driven prior for SSF inversion tasks. However, learning such a distribution is challenging due to the high dimensionality and complexity of 3D SSFs. To tackle this challenge, we propose employing the diffusion model, a cutting-edge deep generative model that has shown remarkable performance in diverse domains, including image and audio processing. Nonetheless, applying this approach to 3D ocean SSFs encounters two primary hurdles. First, the lack of publicly available, well-crafted 3D SSF datasets impedes training and evaluation. Second, 3D SSF data consist of multiple 2D layers with varying variances, which can lead to uneven denoising during the reverse process. To surmount these obstacles, we introduce a novel 3D SSF dataset called 3DSSF, specifically designed for training and evaluating deep generative models. In addition, we devise a high-capacity neural architecture for the diffusion model to effectively handle variations in 3D sound speeds. Furthermore, we employ a state-of-the-art continuous-time optimization method and a predictor-corrector scheme for high-performance training and sampling. Notably, this paper presents the first evaluation of the diffusion model's effectiveness in generating 3D SSF data. Numerical experiments validate the proposed method's strong ability to learn the underlying data distribution of 3D SSFs, and highlight its effectiveness in assisting SSF inversion tasks and subsequently characterizing underwater acoustic transmission loss.
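
To make the sampling procedure mentioned above concrete, the following is a minimal, illustrative sketch of a generic predictor-corrector (PC) sampler for a variance-exploding score-based diffusion model, in the spirit of continuous-time score-based methods. It is not the paper's implementation: the names `score_fn`, `sigma_min`, `sigma_max`, `snr`, and the toy analytic score are placeholder assumptions, and in practice `score_fn` would be the trained score network over 3D SSF volumes.

```python
# Hedged sketch of predictor-corrector sampling for a variance-exploding
# (VE) score-based diffusion model. All hyperparameter names and the toy
# score function are illustrative assumptions, not the paper's settings.
import numpy as np

def pc_sampler(score_fn, shape, sigma_min=0.01, sigma_max=50.0,
               num_steps=500, corrector_steps=1, snr=0.16, rng=None):
    """Draw one sample with reverse-diffusion predictor + Langevin corrector."""
    rng = np.random.default_rng() if rng is None else rng
    # Geometric noise schedule sigma(t), discretized from t = 1 down to t ~ 0.
    t = np.linspace(1.0, 1e-3, num_steps)
    sigmas = sigma_min * (sigma_max / sigma_min) ** t
    # Start from the VE prior: N(0, sigma_max^2 I).
    x = rng.standard_normal(shape) * sigma_max
    for i in range(num_steps):
        sigma = sigmas[i]
        sigma_next = sigmas[i + 1] if i + 1 < num_steps else 0.0
        # Corrector: a few Langevin MCMC steps at the current noise level.
        for _ in range(corrector_steps):
            grad = score_fn(x, sigma)
            noise = rng.standard_normal(shape)
            step = 2.0 * (snr * np.linalg.norm(noise) /
                          (np.linalg.norm(grad) + 1e-12)) ** 2
            x = x + step * grad + np.sqrt(2.0 * step) * noise
        # Predictor: one reverse-diffusion step from sigma to sigma_next.
        z = rng.standard_normal(shape)
        var_diff = sigma ** 2 - sigma_next ** 2
        x = x + var_diff * score_fn(x, sigma) + np.sqrt(var_diff) * z
    return x

# Toy usage: an analytic stand-in score (that of N(0, (1 + sigma^2) I)) plays
# the role of the trained network for a hypothetical 3D SSF volume with
# 10 depth layers on a 32 x 32 horizontal grid.
toy_score = lambda x, sigma: -x / (1.0 + sigma ** 2)
sample = pc_sampler(toy_score, shape=(10, 32, 32))
print(sample.shape)
```

The corrector's step size is tied to a signal-to-noise ratio so that the Langevin refinement adapts to the current noise level, which is one common way such schemes stabilize sampling when different layers of the data carry different variances.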
