Abstract

Sidescan sonars are increasingly used in underwater search and rescue for drowning victims, wrecks, and airplanes. Automatic object classification or detection methods can greatly assist in long searches, where sonar operators may become exhausted and miss a possible target. However, most existing underwater object detection methods for sidescan sonar images are aimed at detecting mine-like objects and ignore the classification of civilian objects, mainly due to the lack of suitable datasets. In this study, we therefore focus on the multi-class classification of drowning victims, wrecks, airplanes, mines, and seafloor in sonar images. First, through long-term accumulation, we built a real sidescan sonar image dataset named SeabedObjects-KLSG, which currently contains 385 wreck, 36 drowning victim, 62 airplane, 129 mine, and 578 seafloor images. Second, considering that the real dataset is imbalanced, we propose a semisynthetic data generation method for producing sonar images of airplanes and drowning victims, which takes optical images as input and combines image segmentation with simulation of the intensity distributions of different regions. Finally, we demonstrate that by transferring a pre-trained deep convolutional neural network (CNN), e.g., VGG19, and fine-tuning it on 70% of the real dataset plus the semisynthetic data, the overall accuracy on the remaining 30% of the real dataset can be improved to 97.76%, the highest among all the methods compared. Our work indicates that combining semisynthetic data generation with deep transfer learning is an effective way to improve the accuracy of underwater object classification.
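As a concrete illustration of the transfer-learning step in the abstract, the sketch below fine-tunes an ImageNet-pretrained VGG19 on the five classes. It is a minimal sketch assuming Keras; the classifier head, input size, and training hyperparameters are illustrative choices, not the paper's exact configuration.

    # Minimal sketch: fine-tune an ImageNet-pretrained VGG19 for the five
    # sonar classes (wreck, drowning victim, airplane, mine, seafloor).
    # Head layers, input size, and hyperparameters are illustrative.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 5

    # Convolutional base with pretrained weights; drop the ImageNet classifier.
    base = tf.keras.applications.VGG19(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3))

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    # Small learning rate so fine-tuning does not destroy pretrained features.
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=30)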

Highlights

  • Sidescan sonars can provide high-resolution images of the seabed even in zero-visibility water, which makes them very useful in a variety of military and civilian applications such as mine countermeasures, ocean mapping, offshore oil prospecting, and underwater search and rescue [1]–[3].

  • Experiment settings: To demonstrate the effectiveness of the proposed method, which uses both deep transfer learning and semisynthetic training data, its results are compared with those of four methods: the bag-of-features (BOF) descriptor [56] on SIFT features [57] with support vector machine (SVM) classification, a shallow convolutional neural network (CNN) trained from scratch [40], the gcForest method using Deep Forest [58], and the method for deep learning on small datasets [59] (sketches of two of these baselines follow this list).

  • All the methods used for comparison have been shown to be effective with small-scale training data: before the rise of deep learning, SIFT-based descriptors usually performed best among various local descriptors [60], and SVMs can work well with only small samples [61]; the gcForest method is highly competitive with deep neural networks even when only small-scale training data are available; and the method in [59] transfers all layers of a pretrained VGG16 network except the last two fully connected layers, adds a new fully connected layer, and fine-tunes the network to achieve good performance on small datasets (see the second sketch below).
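The classical baseline in [56], [57] can be sketched as follows: extract SIFT descriptors, build a visual vocabulary by k-means clustering, encode each image as a histogram of visual words, and train an SVM on the histograms. This is a minimal sketch assuming OpenCV and scikit-learn; the vocabulary size and SVM settings are illustrative assumptions, not those of the cited work.

    # Sketch of the SIFT + bag-of-features + SVM baseline.
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    sift = cv2.SIFT_create()

    def sift_descriptors(gray):
        # detectAndCompute returns None when no keypoints are found.
        _, desc = sift.detectAndCompute(gray, None)
        return desc if desc is not None else np.empty((0, 128), np.float32)

    def train_bof_svm(images, labels, vocab_size=200):
        # 1) Visual vocabulary: k-means over all training descriptors.
        all_desc = np.vstack([sift_descriptors(im) for im in images])
        vocab = KMeans(n_clusters=vocab_size, n_init=4).fit(all_desc)
        # 2) Encode each image as a normalized histogram of visual words.
        hists = []
        for im in images:
            d = sift_descriptors(im)
            words = vocab.predict(d) if len(d) else np.empty(0, int)
            h = np.bincount(words, minlength=vocab_size).astype(float)
            hists.append(h / (h.sum() + 1e-9))
        # 3) SVM on the histogram features.
        clf = SVC(kernel="rbf", C=10.0).fit(np.array(hists), labels)
        return vocab, clf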
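The small-dataset transfer recipe described in the last bullet (from [59]) maps naturally onto Keras: load a full ImageNet-pretrained VGG16, truncate it after the first fully connected layer (i.e., drop the last two), attach a new fully connected classifier, and fine-tune. The layer choice and optimizer below are one reading of that recipe, not the reference implementation.

    # Sketch of the transfer recipe in [59]: keep VGG16 up to 'fc1'
    # (i.e. drop the last two fully connected layers), add a new
    # fully connected classifier, and fine-tune end to end.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    full = tf.keras.applications.VGG16(weights="imagenet")  # has fc1/fc2/predictions
    trunk = models.Model(inputs=full.input,
                         outputs=full.get_layer("fc1").output)

    outputs = layers.Dense(5, activation="softmax",
                           name="sonar_classes")(trunk.output)
    model = models.Model(inputs=trunk.input, outputs=outputs)

    model.compile(optimizer=tf.keras.optimizers.SGD(1e-4, momentum=0.9),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    # model.fit(...) then fine-tunes all transferred layers on the sonar data.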


Summary

Introduction

Sidescan sonars can provide high-resolution images of the seabed even in zero-visibility water, which makes them very useful in a variety of military and civilian applications such as mine countermeasures, ocean mapping, offshore oil prospecting, and underwater search and rescue [1]–[3]. A key contribution of this work addresses the imbalance of the real dataset: we propose a semisynthetic data generation method for producing sonar images of airplanes and drowning victims, which takes optical images as input and combines image segmentation with simulation of the intensity distributions of different regions (a sketch of this idea follows).
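The following is a minimal sketch of that semisynthetic-generation idea, assuming Otsu threshold segmentation and Rayleigh-distributed region intensities; the paper's actual segmentation procedure and intensity models may differ. It takes a grayscale optical image (uint8), splits it into object, shadow, and background regions, and resamples each region from its own distribution.

    # Sketch of the semisynthetic idea: segment, then resample region
    # intensities. Otsu segmentation and Rayleigh speckle are assumptions.
    import cv2
    import numpy as np

    def semisynthetic_sonar(optical_gray, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        # 1) Segment: Otsu separates the bright object from the background;
        #    a range-shifted copy of the object mask stands in for the
        #    acoustic shadow the object would cast.
        _, obj = cv2.threshold(optical_gray, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        obj = obj.astype(bool)
        shadow = np.roll(obj, optical_gray.shape[1] // 8, axis=1) & ~obj
        # 2) Simulate: draw each region from its own intensity distribution
        #    (bright highlight, dark shadow, mid-level seafloor reverberation).
        out = rng.rayleigh(scale=60.0, size=optical_gray.shape)        # seafloor
        out[obj] = rng.rayleigh(scale=110.0, size=int(obj.sum()))      # highlight
        out[shadow] = rng.rayleigh(scale=15.0, size=int(shadow.sum())) # shadow
        return np.clip(out, 0, 255).astype(np.uint8)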


