Abstract

Classifying underwater acoustic targets at different depths with passive sonar is a challenging task. Although a self-contained hydrophone array keeps most of its units operating normally in diverse environments, precise time synchronization among the hydrophones is difficult to achieve, which complicates data fusion across them. For a vertical sonar array composed of self-contained units, a deep learning-based data compression and multihydrophone fusion (DCMF) model is proposed to quickly extract the acoustic propagation interference features used for underwater acoustic target classification. Unlike frequency-range domain striation features, which must be accumulated over a long time, the proposed approach exploits the depth differences between hydrophones to obtain joint frequency-depth domain striation features in a short time. DCMF performs efficient feature compression and fusion via parallel stacked sparse autoencoders and a multi-input fusion network. The experimental results show that the compressed features are robust, exhibit a low mean square error against simulation results, and require shorter signal lengths, which improves the classification efficiency and real-time performance of DCMF. On the experimental dataset, DCMF is compared with several state-of-the-art multiscale fusion models and achieves the best performance at the lowest computational complexity.
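As a rough illustration of the architecture named in the abstract, the sketch below pairs one stacked sparse autoencoder per hydrophone with a multi-input fusion classifier. This is a minimal sketch under assumed settings: the module names, layer widths, code dimension, number of hydrophones, and the L1 sparsity weight are all hypothetical and are not taken from the paper.

```python
# Illustrative sketch only: parallel stacked sparse autoencoders feeding a
# multi-input fusion classifier, loosely following the abstract's description
# of DCMF. All dimensions and hyperparameters below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StackedSparseAE(nn.Module):
    """Two-layer autoencoder compressing one hydrophone's feature vector."""
    def __init__(self, in_dim: int = 512, hidden: int = 128, code: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, code), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)          # compressed code for this hydrophone
        return z, self.decoder(z)    # code and reconstruction

class FusionClassifier(nn.Module):
    """Concatenates the per-hydrophone codes and predicts the target class."""
    def __init__(self, n_hydrophones: int = 4, code: int = 32, n_classes: int = 2):
        super().__init__()
        self.aes = nn.ModuleList(
            StackedSparseAE(code=code) for _ in range(n_hydrophones)
        )
        self.head = nn.Sequential(
            nn.Linear(n_hydrophones * code, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, xs):
        # xs: list of (batch, in_dim) tensors, one per hydrophone
        codes, recons = zip(*(ae(x) for ae, x in zip(self.aes, xs)))
        return self.head(torch.cat(codes, dim=1)), codes, recons

# Usage with random stand-in data (4 hydrophones, batch of 8):
model = FusionClassifier()
xs = [torch.randn(8, 512) for _ in range(4)]
labels = torch.randint(0, 2, (8,))
logits, codes, recons = model(xs)
loss = F.cross_entropy(logits, labels)
loss += sum(F.mse_loss(r, x) for r, x in zip(recons, xs))  # reconstruction
loss += 1e-4 * sum(z.abs().mean() for z in codes)          # L1 sparsity on codes
loss.backward()
```

The joint loss here combines classification cross-entropy with per-hydrophone reconstruction error and an L1 sparsity penalty on the latent codes, one common way to train stacked sparse autoencoders end to end; the paper's actual training procedure may differ.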
