Abstract

This paper addresses Visual Odometry (VO) estimation in challenging underwater scenarios. Visual-based robot navigation faces several additional difficulties in the underwater context, which severely hinder both its robustness and the prospect of persistent autonomy for underwater mobile robots relying on visual perception. In this work, some of the most renowned VO and Visual Simultaneous Localization and Mapping (v-SLAM) frameworks are tested in complex underwater environments, assessing the extent to which they perform accurately and reliably in operational robotic mission scenarios. The fundamental issues of precision, reliability, and robustness across multiple operational scenarios, coupled with the rising predominance of Deep Learning architectures in several Computer Vision application domains, have prompted a great volume of recent research on Deep Learning architectures tailored for visual odometry estimation. In this work, the performance and accuracy of Deep Learning methods in the underwater context are also benchmarked and compared against classical methods. Additionally, an extension of current work is proposed, in the form of a visual-inertial sensor fusion network aimed at correcting visual odometry estimate drift. Anchored on an inertial supervision learning scheme, our network improved upon trajectory estimates, producing estimates that are both metrically better and more visually consistent in trajectory shape.
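The abstract does not detail the fusion architecture. As a rough illustration only, the sketch below shows one plausible shape for a visual-inertial correction network trained under an inertial supervision scheme: an IMU-window encoder whose features are fused with the raw VO relative pose to regress an additive drift correction. This is a minimal PyTorch-style sketch under stated assumptions; the names (VIFusionNet, inertial_loss), the 6-DoF pose parameterization, and the GRU-based IMU encoder are hypothetical and not taken from the paper.

# Hypothetical sketch, NOT the paper's implementation: a fusion module
# that refines a raw VO pose delta using IMU features, with an
# inertially supervised loss on the corrected estimate.
import torch
import torch.nn as nn

class VIFusionNet(nn.Module):  # hypothetical name
    def __init__(self, imu_dim=6, hidden=128):
        super().__init__()
        # Encode a window of IMU readings (gyro + accel) into one feature vector.
        self.imu_encoder = nn.GRU(input_size=imu_dim, hidden_size=hidden,
                                  batch_first=True)
        # Fuse the IMU feature with the 6-DoF VO delta and regress a correction.
        self.fusion = nn.Sequential(
            nn.Linear(hidden + 6, hidden), nn.ReLU(),
            nn.Linear(hidden, 6),  # additive correction to the VO pose delta
        )

    def forward(self, vo_delta, imu_window):
        # vo_delta: (B, 6) relative pose from the VO front end
        # imu_window: (B, T, 6) IMU samples between the two frames
        _, h = self.imu_encoder(imu_window)
        correction = self.fusion(torch.cat([h[-1], vo_delta], dim=1))
        return vo_delta + correction

def inertial_loss(pred_delta, inertial_delta):
    # Inertial-supervision style loss: penalize disagreement with the
    # inertially derived relative pose (stand-in for the paper's target).
    return nn.functional.smooth_l1_loss(pred_delta, inertial_delta)

# Example usage with dummy tensors:
#   net = VIFusionNet()
#   refined = net(torch.zeros(4, 6), torch.zeros(4, 50, 6))  # (4, 6)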

Highlights

  • Achieving persistent and reliable autonomy for underwater mobile robots in challenging field mission scenarios is a long-standing quest of the Robotics research community, to which a great amount of research has been devoted

  • Visual odometry estimation from outdoor imagery is always challenging due to multiple factors that generate blur, shadows, and other illumination artifacts, leading to low signal-to-noise ratios in images

  • Multiple renowned Visual Odometry (VO), Visual Simultaneous Localization and Mapping (v-SLAM), and deep learning frameworks are evaluated on our underwater dataset sequences

Summary

INTRODUCTION

Achieving persistent and reliable autonomy for underwater mobile robots in challenging field mission scenarios is a long-standing quest of the Robotics research community, to which a great amount of research has been devoted. The availability of large-scale datasets with ever-increasing variability, spanning different scenarios and situational motions, is crucial for the further development of deep learning algorithms and for improving their generalization ability, which in turn leads to improved robustness when deployed in large, complex environments. With this in mind, the data used in this work represents a novel and varied underwater-focused dataset collected with the UX-1, tailored for visual odometry method implementation and evaluation, with which we intend to assess the performance of state-of-the-art methods for VO estimation and Visual SLAM in different underwater scenarios. The data comes from a real operational mission scenario for the UX-1, which was tasked with exploring and mapping the mine. A sketch of the kind of trajectory benchmarking involved follows below.
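For concreteness, benchmarking VO and v-SLAM estimates against ground truth is commonly done by rigidly aligning the estimated trajectory to the reference (Horn/Umeyama alignment) and reporting the absolute trajectory error (ATE) as an RMSE over positions. The following minimal Python sketch illustrates that standard metric; it reflects common practice in the field, not necessarily the paper's exact evaluation pipeline.

# Minimal sketch of a standard VO benchmarking metric: absolute
# trajectory error (ATE, RMSE) after rigid alignment of the estimate
# onto ground truth. Illustrative of common practice only.
import numpy as np

def align_rigid(est, gt):
    """Least-squares rigid alignment (Kabsch/Umeyama) of est onto gt.
    est, gt: (N, 3) arrays of time-associated positions."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    # Cross-covariance between centered ground truth and estimate.
    C = (gt - mu_g).T @ (est - mu_e)
    U, _, Vt = np.linalg.svd(C)
    # Reflection guard keeps R a proper rotation (det(R) = +1).
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt
    t = mu_g - R @ mu_e
    return R, t

def ate_rmse(est, gt):
    """RMSE of position error after rigid alignment."""
    R, t = align_rigid(est, gt)
    err = gt - (est @ R.T + t)
    return np.sqrt((err ** 2).sum(axis=1).mean())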

CAMERA POSE AND MOTION ESTIMATION
RESULTS
Findings
CONCLUSION