Abstract

In underwater acoustic target recognition, wave-based methods and time–frequency (T-F) representation-based methods identify targets from different perspectives, and each has advantages and disadvantages. In this study, a complementary space between the wave-based model and the T-F representation-based model is shown to exist, and advances in lightweight network technologies make fusing the two types of models technically feasible. First, a lightweight multiscale residual deep neural network (MSRDN) is designed using lightweight network design techniques, reducing the parameters of the original MSRDN by 64.18% and its floating-point operations (FLOPs) by 79.45% with only a small loss of accuracy. Then, a joint model combining the wave-based and T-F representation-based models is presented, and an effective synchronous deep mutual learning method that saves approximately 11.54%–16.27% of training time is proposed to train it. Two datasets acquired from real-world scenarios are used to verify the effectiveness of the proposed method. Compared with state-of-the-art methods, the joint model with synchronous deep mutual learning achieves the best recognition accuracies on the two datasets, 85.20% and 79.50%, respectively. Ablation studies show that the performance improvements stem from the deep mutual learning between the two branches and are not tied to particular models. Finally, a discussion reveals the essential mechanism of the proposed method.
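The deep mutual learning objective underlying the joint model can be sketched as follows. This is a minimal NumPy illustration of the standard formulation (each branch minimizes its supervised cross-entropy plus a KL term pulling it toward the peer branch's predictions); the paper's exact loss weighting and the details of the "synchronous" training schedule are not given in the abstract, so all names and the unweighted loss combination here are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q):
    """KL(p || q) per sample, averaged over the batch."""
    return np.mean(np.sum(p * np.log(p / q), axis=-1))

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the true labels."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def mutual_learning_losses(logits_wave, logits_tf, labels):
    """Per-branch losses for deep mutual learning: each branch is
    supervised by the labels and mimics the other branch's softmax
    output via a KL term (equal weighting assumed here)."""
    p_wave = softmax(logits_wave)
    p_tf = softmax(logits_tf)
    loss_wave = cross_entropy(p_wave, labels) + kl_div(p_tf, p_wave)
    loss_tf = cross_entropy(p_tf, labels) + kl_div(p_wave, p_tf)
    return loss_wave, loss_tf
```

In a synchronous setup, both branch losses would be computed from the same forward pass and back-propagated in the same step, rather than alternating updates between the two networks.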
