Underwater Acoustic Target Recognition (UATR) is critical to maritime traffic management and ocean monitoring, yet underwater acoustic analysis remains difficult. The underwater environment is highly complex: ambient noise, variable water conditions (such as temperature and salinity), and multipath propagation of acoustic signals make it challenging to acquire and analyze target features accurately. Traditional UATR methods struggle with fused feature representations and with model generalization. This study introduces CM3F, a novel high-dimensional feature fusion method grounded in signal analysis and brain-like features, and integrates it with the Boundary-Aware Hybrid Transformer Network (BAHTNet), a deep-learning architecture tailored for UATR. BAHTNet comprises CBCARM and XCAT modules, a KAN network for classification, and a large-margin aware focal (LMF) loss function for predictive losses. Experimental results on real-world datasets demonstrate the model's robust generalization, achieving 99.8% accuracy on the ShipsEar dataset and 94.57% on the DeepShip dataset. These findings underscore the potential of BAHTNet to significantly improve UATR performance.
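The abstract names a large-margin aware focal (LMF) loss but does not define it. As an illustrative sketch only, not the paper's actual formulation, the two ingredients such a loss typically combines can be shown in a few lines: a margin subtracted from the true-class logit (the large-margin idea) and a focal modulating factor that down-weights easy examples. The function name `lmf_loss` and the default `margin` and `gamma` values are assumptions for illustration.

```python
import numpy as np

def lmf_loss(logits, labels, margin=0.35, gamma=2.0):
    """Hypothetical sketch of a large-margin focal loss.

    NOTE: this is an assumed composition of two standard ideas,
    not the LMF loss defined in the paper.
    """
    logits = np.asarray(logits, dtype=float).copy()
    labels = np.asarray(labels)
    n = logits.shape[0]
    # Large-margin idea: subtract a margin from the true-class logit,
    # forcing the model to separate classes by at least that margin.
    logits[np.arange(n), labels] -= margin
    # Numerically stable softmax over the margin-adjusted logits.
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    pt = p[np.arange(n), labels]  # probability of the true class
    # Focal idea: (1 - pt)^gamma down-weights well-classified samples.
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))
```

Under this sketch, a confidently correct prediction incurs near-zero loss, while a confidently wrong one is penalized heavily, which is the behavior focal-style losses use to focus training on hard examples.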