Abstract

Recognition and classification of underwater targets in side-scan sonar (SSS) images is challenging because the strong speckle noise caused by seabed reverberation makes it difficult to extract discriminating, noise-free features of a target. Moreover, unlike optical image classification, which can draw on large datasets to train the classifier, SSS image classification usually has to rely on a very small training set, which may cause classifier overfitting. Compared with traditional feature extraction methods based on descriptors such as Haar, SIFT, and LBP, deep learning-based methods are more powerful in capturing discriminating features. After pre-training on a large optical dataset, e.g., ImageNet, direct fine-tuning brings some improvement to sonar image classification on a small SSS dataset. However, because the statistical characteristics of optical and sonar images differ, transfer learning methods such as fine-tuning lack cross-domain adaptability and therefore cannot achieve fully satisfactory results. In this paper, a multi-domain collaborative transfer learning (MDCTL) method with a multi-scale repeated attention mechanism (MSRAM) is proposed to improve the accuracy of underwater sonar image classification. In MDCTL, the low-level characteristic similarity between SSS images and synthetic aperture radar (SAR) images and the high-level representation similarity between SSS images and optical images are used together to enhance the feature extraction ability of the deep learning model. By exploiting the different characteristics of multi-domain data to efficiently capture features useful for sonar image classification, MDCTL offers a new way of performing transfer learning. MSRAM is used to effectively combine multi-scale features so that the proposed model pays more attention to the shape details of the target while suppressing noise. Classification experiments show that, when using multi-domain datasets, the proposed method is more stable, with an overall accuracy of 99.21%, an improvement of 4.54% over the fine-tuned VGG19. Results given by diverse visualization methods also demonstrate that, with MDCTL and MSRAM, the model is more powerful in feature representation.
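To make the multi-domain idea more concrete, the following is a minimal, hypothetical sketch (not the authors' released code) of how the low-level layers of a VGG19 backbone could be initialized from a SAR-pretrained checkpoint while the remaining layers keep ImageNet weights. The checkpoint name sar_vgg19.pth, the chosen layer split, and the number of SSS classes are all assumptions for illustration.

```python
# Hedged sketch of the MDCTL initialization idea, not the paper's code.
# Assumptions: a SAR-pretrained VGG19 checkpoint "sar_vgg19.pth" with the
# same torchvision layer naming, and 4 SSS target classes (placeholder).
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

NUM_SSS_CLASSES = 4  # placeholder class count for the small SSS dataset

# High-level representations: start from ImageNet-pretrained VGG19.
model = vgg19(weights=VGG19_Weights.IMAGENET1K_V1)

# Low-level characteristics: copy the first two conv blocks from a
# (hypothetical) VGG19 trained on SAR image classification.
sar_state = torch.load("sar_vgg19.pth", map_location="cpu")
early_prefixes = ("features.0.", "features.2.", "features.5.", "features.7.")
merged = model.state_dict()
for name in merged:
    if name.startswith(early_prefixes) and name in sar_state:
        merged[name] = sar_state[name]
model.load_state_dict(merged)

# Replace the ImageNet classifier head for the small SSS dataset.
model.classifier[6] = nn.Linear(4096, NUM_SSS_CLASSES)
```

In the actual MDCTL method the two source domains are used collaboratively during training rather than only at initialization; the sketch only illustrates where each kind of similarity (low-level SAR vs. high-level optical) enters the backbone.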

Highlights

  • As a main detection approach for many underwater tasks—such as maritime emergency rescue, wreckage salvage, and military defense—side-scan sonar (SSS) can quickly search sizeable areas and obtain continuous two-dimensional images of the marine environment, even in low-visibility water [1,2]

  • Given that edge features and detailed information of the target can be better extracted from the convolutional layers close to the input, we visualized the first convolutional layer responses of four models: VGG19 without pre-training, VGG19 pre-trained on ImageNet, VGG19 learned from the synthetic aperture radar (SAR) classification model, and a model with the repeated attention mechanism (RAM); a visualization sketch follows this list

  • The multi-domain collaborative transfer learning (MDCTL)-multi-scale repeated attention mechanism (MSRAM) model proposed in this paper improves underwater target classification in SSS images, which is important for underwater applications such as emergency search, sea rescue, wreck recovery, and military defense, and for other unmanned devices that require target object detection and classification
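As a companion to the visualization highlight above, this is a small sketch (an assumed PyTorch/torchvision workflow, not the paper's code) of how first-layer response maps can be rendered for a single SSS image; the file name sss_sample.png is a placeholder.

```python
# Hedged sketch: visualize the 64 response maps of the first convolutional
# layer of an ImageNet-pretrained VGG19 for one (placeholder) SSS image.
import torch
import matplotlib.pyplot as plt
from torchvision.models import vgg19, VGG19_Weights
from torchvision.io import read_image
from torchvision.transforms.functional import resize, convert_image_dtype

model = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).eval()

img = convert_image_dtype(read_image("sss_sample.png"), torch.float)  # hypothetical file
img = resize(img, [224, 224]).unsqueeze(0)  # 1 x C x 224 x 224, assumes 1- or 3-channel input
if img.shape[1] == 1:                        # grayscale SSS image -> 3 channels
    img = img.repeat(1, 3, 1, 1)

with torch.no_grad():
    response = model.features[0](img)        # first conv layer: 64 feature maps

fig, axes = plt.subplots(8, 8, figsize=(12, 12))
for k, ax in enumerate(axes.flat):
    ax.imshow(response[0, k].numpy(), cmap="gray")
    ax.axis("off")
plt.tight_layout()
plt.savefig("first_conv_responses.png")
```

The same routine can be repeated for the other backbones (untrained, SAR-transferred, RAM-equipped) to compare how much target edge detail versus background speckle each first layer responds to.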

Summary

Introduction

As a main detection approach for many underwater tasks—such as maritime emergency rescue, wreckage salvage, and military defense—side-scan sonar (SSS) can quickly search sizeable areas and obtain continuous two-dimensional images of the marine environment, even in low-visibility water [1,2]. Model-based methods that rely on prior knowledge or data have been proposed for feature extraction, but they require strong consistency and similarity between the testing and training datasets. We instead utilize the different features of multi-domain images, rather than synthesizing data, to guide the training of a classification model on a limited SSS dataset. Another problem is that the complex characteristics of SSS images—such as blurred edges, strong noise, and varied target shapes—make it very difficult to extract useful features. We therefore propose an automatic side-scan sonar image classification method that combines multi-domain collaborative transfer learning (MDCTL) with the multi-scale repeated attention mechanism (MSRAM).
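The MSRAM is built from channel attention, spatial attention, and multi-scale repetition (see the module headings below). As an illustration only, the following is a CBAM-style channel-plus-spatial attention block applied repeatedly; it is an assumption about the general form, not a reproduction of the authors' exact module or its multi-scale feature fusion.

```python
# Illustrative sketch only: a CBAM-style channel + spatial attention block,
# standing in for the paper's MSRAM; the exact multi-scale repetition and
# fusion scheme used by the authors is not reproduced here.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        w = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * w                          # reweight channels

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))  # reweight spatial positions

class RepeatedAttention(nn.Module):
    """Channel attention followed by spatial attention, applied repeatedly."""
    def __init__(self, channels: int, repeats: int = 2):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(ChannelAttention(channels), SpatialAttention())
            for _ in range(repeats)
        )

    def forward(self, x):
        for stage in self.stages:
            x = stage(x)
        return x
```

In a multi-scale setting, one such block would be attached to feature maps at several backbone depths before the reweighted features are fused for classification.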

Fine-Tuning
Transfer Learning from Multi-Domain
Architecture
Backbone Network-VGG19
Detailed
Attention Mechanism
Channel Attention Module
Spatial Attention Module
Multi-Scale Repeated Attention Module
Proposed
Dataset Used
Experimental Details
Network Model Evaluating Indicator
Performance Analysis
Methods
Feature Response Map Visualization
Feature Heat Maps Based on Grad-CAM
Experimental Comparison of Different Transfer Learning Methods
Applications for Detection
Method
Findings
Limitations of the Proposed Method
Conclusions