Abstract

Underwater acoustic target recognition remains a formidable challenge in underwater acoustic signal processing. Current recognition approaches predominantly rely on acoustic image target recognition models. However, this method suffers from two primary drawbacks: the pronounced frequency similarity within acoustic images often leads to the loss of critical target information during feature extraction, and the inherent data imbalance in underwater acoustic target datasets predisposes models to overfitting. To address these challenges, this research introduces an underwater acoustic target recognition model named the Attention Mechanism Residual Concatenate Network (ARescat). The model integrates residual concatenate networks with Squeeze-Excitation (SE) attention mechanisms, and the entire process culminates in joint supervision with Focal Loss for precise feature classification. We conducted recognition experiments on the ShipsEar database and compared the ARescat model with the classic ResNet18 model under identical feature extraction conditions. The results show that, with a similar number of parameters to ResNet18, ARescat achieves a 2.8% higher recognition accuracy, reaching 95.8%. This advantage holds across comparisons of different models and feature extraction methods, underscoring ARescat's superior capability in underwater acoustic target recognition.
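To make the two building blocks named above concrete, the sketch below shows a minimal, generic Squeeze-Excitation (SE) attention module and a Focal Loss criterion in PyTorch. This is an illustration only, not the authors' implementation: the reduction ratio, focusing parameter gamma, and layer sizes are assumptions, and the full ARescat architecture (the residual concatenate backbone) is not reproduced here.

```python
# Illustrative sketch (not the paper's code) of an SE attention block and
# Focal Loss supervision, as referenced in the abstract. Hyperparameters
# (reduction=16, gamma=2.0) are assumed values, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEBlock(nn.Module):
    """Squeeze-Excitation: reweight feature channels by pooled global statistics."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):                                  # x: (N, C, H, W)
        s = x.mean(dim=(2, 3))                             # squeeze: global average pooling
        s = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))   # excitation: per-channel weights
        return x * s.view(x.size(0), -1, 1, 1)             # rescale each channel

class FocalLoss(nn.Module):
    """Focal Loss: down-weight easy examples to mitigate class imbalance."""
    def __init__(self, gamma: float = 2.0):
        super().__init__()
        self.gamma = gamma

    def forward(self, logits, targets):
        ce = F.cross_entropy(logits, targets, reduction="none")
        pt = torch.exp(-ce)                                 # probability of the true class
        return ((1.0 - pt) ** self.gamma * ce).mean()
```

In this reading, the SE block supplies the channel attention that counters the loss of target information caused by frequency similarity, while Focal Loss addresses the dataset imbalance noted in the abstract.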
