Abstract

Understanding sound propagation in a shallow ocean environment is complicated but important for naval technologies. Researchers are currently applying a variety of supervised deep learning methods, such as convolutional neural networks, to source localization and seabed classification. One cost of supervised learning is the requirement for large amounts of labeled data. In the present study, unlabeled data are used to train a Transformer model. Transformer-based models have demonstrated the ability to predict sequential data; examples include Google’s BERT and OpenAI’s GPT-3. In our technique, we train a Transformer-based model in a two-part process. First, self-supervised learning is implemented using synthetic ship spectrograms for various shallow ocean environments; the model is trained as an encoder/decoder to perform sequence-to-sequence prediction. Second, the Transformer model is fine-tuned to predict the source location and seabed class from a small set of labeled synthetic samples. Data samples measured during the Seabed Characterization Experiment 2017 are used as a testing dataset. The advantages of this approach include the ability to train a model on a larger variety of data, including unlabeled data and data of variable input length.
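
The abstract does not include an implementation, but the two-stage procedure it describes (self-supervised sequence-to-sequence pretraining on spectrograms, followed by supervised fine-tuning for source location and seabed class) can be sketched as follows. This is a minimal illustration assuming a PyTorch encoder/decoder Transformer; the class name, dimensions, heads, and losses are assumptions for illustration, not the authors’ code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ShipSpectrogramTransformer(nn.Module):
    """Encoder/decoder Transformer over spectrogram time frames (illustrative)."""

    def __init__(self, n_freq_bins=128, d_model=256, nhead=8, num_layers=4, n_seabed_classes=4):
        super().__init__()
        self.embed = nn.Linear(n_freq_bins, d_model)        # project each time frame
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.frame_head = nn.Linear(d_model, n_freq_bins)        # stage 1: predict spectrogram frames
        self.range_head = nn.Linear(d_model, 1)                  # stage 2: source location (range)
        self.seabed_head = nn.Linear(d_model, n_seabed_classes)  # stage 2: seabed class

    def forward(self, src, tgt):
        # src, tgt: (batch, time, n_freq_bins)
        return self.transformer(self.embed(src), self.embed(tgt))

    def pretrain_loss(self, src, tgt_in, tgt_out):
        # Stage 1: self-supervised sequence-to-sequence prediction of later
        # spectrogram frames from earlier ones (no labels required).
        return F.mse_loss(self.frame_head(self(src, tgt_in)), tgt_out)

    def finetune_loss(self, src, tgt_in, range_km, seabed_class):
        # Stage 2: pool decoder states and predict source range and seabed class
        # from a small labeled synthetic set.
        pooled = self(src, tgt_in).mean(dim=1)
        return (F.mse_loss(self.range_head(pooled).squeeze(-1), range_km)
                + F.cross_entropy(self.seabed_head(pooled), seabed_class))


# Example: pretrain on one batch of synthetic spectrograms, then fine-tune.
model = ShipSpectrogramTransformer()
spec = torch.randn(8, 100, 128)                 # 8 samples, 100 time frames, 128 frequency bins
loss = model.pretrain_loss(spec[:, :50], spec[:, 49:99], spec[:, 50:])
loss.backward()
labels_range = torch.rand(8) * 10.0             # hypothetical source ranges in km
labels_seabed = torch.randint(0, 4, (8,))       # hypothetical seabed class labels
loss_ft = model.finetune_loss(spec[:, :50], spec[:, 49:99], labels_range, labels_seabed)
```

Because the Transformer operates on sequences of spectrogram frames rather than fixed-size images, inputs of variable length can be accommodated directly, which corresponds to the advantage noted at the end of the abstract.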
