Abstract

This paper proposes a method to perform on-line multi-class segmentation of Side-Scan Sonar acoustic images, making it possible to build a semantic map of the sea bottom that can be used to search for loop candidates in a SLAM context. The proposal follows three main steps. First, the sonar data is pre-processed by means of acoustics-based models. Second, the data is segmented by a lightweight Convolutional Neural Network fed with acoustic swaths gathered within a temporal window. Third, the segmented swaths are fused into a consistent segmented image. The experiments, performed with real data gathered in coastal areas of Mallorca (Spain), explore all possible configurations and show the validity of our proposal both in terms of segmentation quality, with per-class precisions and recalls surpassing 90%, and in terms of computational speed, requiring less than 7% of CPU time on a standard laptop computer. The fully documented source code, together with some trained models and datasets, is provided as part of this study.
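The abstract outlines a three-step pipeline: acoustic pre-processing, per-swath segmentation with a lightweight CNN fed a temporal window of swaths, and fusion of the segmented swaths into one image. The sketch below illustrates one way such a pipeline could be organised; the window size, number of classes, layer sizes, normalisation and fusion rules are placeholder assumptions for illustration and are not taken from the paper.

```python
# Hedged sketch of the three-step pipeline described in the abstract.
# All sizes, the pre-processing model and the fusion rule are assumptions.
import numpy as np
import torch
import torch.nn as nn

NUM_CLASSES = 4        # assumed number of seabed classes
WINDOW = 9             # assumed number of consecutive swaths per input window
SWATH_BINS = 256       # assumed number of across-track bins per swath


def preprocess_swath(raw_swath, beam_pattern):
    """Step 1 (assumed form): correct raw intensities with an acoustics-based
    model, e.g. normalising by the sensor beam pattern, then standardise."""
    corrected = raw_swath / np.clip(beam_pattern, 1e-6, None)
    return (corrected - corrected.mean()) / (corrected.std() + 1e-6)


class SwathSegNet(nn.Module):
    """Step 2: a lightweight CNN that labels one swath from a temporal window
    of consecutive swaths (architecture is illustrative only)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Collapse the temporal (window) dimension, keep the across-track one.
        self.head = nn.Conv2d(32, NUM_CLASSES, kernel_size=(WINDOW, 1))

    def forward(self, x):                 # x: (B, 1, WINDOW, SWATH_BINS)
        return self.head(self.features(x)).squeeze(2)   # (B, C, SWATH_BINS)


def fuse_segmented_swaths(per_swath_probs):
    """Step 3 (assumed rule): fuse overlapping per-swath class probabilities
    into a consistent label map by averaging and taking the arg-max."""
    return np.mean(per_swath_probs, axis=0).argmax(axis=0)


if __name__ == "__main__":
    net = SwathSegNet()
    window = torch.randn(1, 1, WINDOW, SWATH_BINS)       # stand-in sonar data
    probs = torch.softmax(net(window), dim=1)            # per-bin class probabilities
    print(probs.shape)                                    # (1, NUM_CLASSES, SWATH_BINS)
```

Feeding a window of consecutive swaths gives the network along-track context while still producing labels swath by swath, which is compatible with on-line operation and incremental fusion into a segmented map.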

Highlights

  • Even though cameras are gaining popularity in underwater robotics, computer vision still presents some problems in these scenarios [1]

  • This paper proposes a method to perform on-line multi-class segmentation of Side-Scan Sonar acoustic images, making it possible to build a semantic map of the sea bottom that can be used to search for loop candidates in a Simultaneous Localization and Mapping (SLAM) context

  • The Autonomous Underwater Vehicle (AUV) was equipped with a Doppler Velocity Log (DVL) sensor, providing instantaneous speed information as well as precise altitude and heading measurements

Summary

Introduction

Even though cameras are gaining popularity in underwater robotics, computer vision still presents some problems in these scenarios [1]. Underwater vision is usually constrained to missions in which the Autonomous Underwater Vehicle (AUV) can navigate close to the sea bottom to properly observe it [2,3]. Acoustic sensors, or sonars [4], are well suited for subsea environments because of their large sensing range, because they are not influenced by illumination conditions, and because they can operate in a wider range of scenarios. That is why sonar is still the modality of choice in underwater robotics, being used either as the main exteroceptive sensor [7] or combined with cameras for close-range navigation.


