Abstract
The three-dimensional sound-speed distribution is essential for large-scale underwater acoustic applications because it shapes signal propagation trajectories. However, measuring sound speed with traditional methods is labor-, energy-, and time-consuming owing to limited system maneuverability. In this article, an autonomous-underwater-vehicle-assisted underwater sound-speed inversion framework that combines ray tracing with an artificial intelligence model is proposed to quickly obtain the 3-D sound-speed distribution through inversions at multiple coordinates. An autoencoding-translation neural network is proposed to establish the nonlinear mapping from signal propagation time to the sound-speed profile (SSP), so that inversion requires only a single forward pass through the model. Robustness is improved by translating error-resistant implicit features into the SSP through the proposed translation network, where the implicit features are extracted by the autoencoder via denoising reconstruction of the input time information. To mitigate overfitting and extend the training data set, virtual SSPs are generated from sparse feature points of real SSPs. Simulation results show that the proposed approach provides reliable and instantaneous monitoring of the 3-D sound-speed distribution.
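To make the autoencoding-translation idea concrete, the sketch below shows one plausible arrangement: a denoising autoencoder that compresses noisy propagation-time measurements into implicit features, and a translation network that maps those features to a depth-sampled SSP in a single forward pass. The layer widths, input dimension (32 travel times), output dimension (50 depth points), and module names are illustrative assumptions, not the architecture reported in the article.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Reconstructs clean propagation-time vectors from noisy inputs;
    the bottleneck yields error-resistant implicit features.
    (Layer sizes are illustrative assumptions.)"""
    def __init__(self, n_times=32, n_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_times, 64), nn.Tanh(),
            nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.Tanh(),
            nn.Linear(64, n_times))

    def forward(self, noisy_times):
        z = self.encoder(noisy_times)       # implicit features
        return self.decoder(z), z           # reconstruction + features

class TranslationNet(nn.Module):
    """Translates implicit time features into a depth-sampled SSP."""
    def __init__(self, n_latent=8, n_depths=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_latent, 64), nn.Tanh(),
            nn.Linear(64, n_depths))

    def forward(self, z):
        return self.net(z)

# Inversion is a single forward pass: noisy travel times -> features -> SSP.
ae, tr = DenoisingAutoencoder(), TranslationNet()
noisy_times = torch.randn(1, 32)            # placeholder measurements
_, features = ae(noisy_times)
ssp_estimate = tr(features)                 # sound speed at 50 depth points
```

In training, the autoencoder would be fit to reconstruct clean travel times from noise-perturbed ones, and the translation network to regress SSPs (including the generated virtual SSPs) from the resulting features; at inference, one pass through both modules yields the profile for a given coordinate.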