Abstract

This paper presents a vision-based navigation system for an autonomous underwater vehicle in semistructured environments with poor visibility. In terrestrial and aerial applications, the use of visual systems mounted on robotic platforms as a sensor for control feedback is commonplace. However, vision-based robotic tasks are still not widely considered for underwater applications, as the images captured in this type of environment tend to be blurred and/or color depleted. To tackle this problem, we have adapted the lαβ color space to identify features of interest in underwater images even under extreme visibility conditions. To guarantee the stability of the vehicle at all times, a model-free robust controller is used. We have validated the performance of our visual navigation system in real environments, showing the feasibility of our approach.
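
The abstract does not reproduce the details of the authors' lαβ adaptation. As a point of reference only, a minimal sketch of the standard RGB to lαβ conversion (the decorrelated color space of Ruderman et al., in the matrix form popularized by Reinhard et al.) is given below; the rgb_to_lalphabeta helper and the clipping threshold are illustrative assumptions, not the authors' implementation.

import numpy as np

# Standard RGB -> LMS matrix used in the Reinhard et al. formulation of lαβ.
RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])

# Decorrelation transform from log-LMS to the l, α, β channels.
LMS2LAB = (np.diag([1/np.sqrt(3), 1/np.sqrt(6), 1/np.sqrt(2)]) @
           np.array([[1,  1,  1],
                     [1,  1, -2],
                     [1, -1,  0]], dtype=float))

def rgb_to_lalphabeta(rgb):
    """Convert an HxWx3 RGB image (floats in [0, 1]) to the lαβ color space."""
    lms = rgb.reshape(-1, 3) @ RGB2LMS.T
    lms = np.log10(np.maximum(lms, 1e-6))   # avoid log of zero in dark pixels
    lab = lms @ LMS2LAB.T
    return lab.reshape(rgb.shape)

How the individual lαβ channels are thresholded or combined to detect landmarks under poor visibility is specific to the paper and not shown here.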

Highlights

  • The development of research in autonomous underwater vehicles (AUVs) began approximately four decades ago

  • A vision-based controller that guides the navigation of an AUV in a semistructured environment using artificial landmarks is presented

  • The main objective of this work is to provide an aquatic robot with the capability of moving in an environment where visibility conditions are far from ideal and artificial landmarks are placed with an approximately known distribution

Summary

Introduction

The development of research in autonomous underwater vehicles (AUVs) began approximately four decades ago. The localization and navigation problems represent a challenge in AUV development due to the unstructured and hazardous conditions of the environment and the complexity of determining the global position of the vehicle. An extensive review of the research related to this topic is presented in [1,2,3,4]. Sensor systems play a relevant role in the development of AUV navigation systems, as they provide information about the system status and/or environmental conditions. Global or local pose estimation of underwater vehicles is still an open problem, especially when a single sensor is used. Underwater vehicles use multisensor systems to estimate their position and determine the location of objects in their workspace. Data fusion is needed to increase the accuracy of the pose estimation (for a review of sensor fusion techniques, see [9]).
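
The introduction only motivates multisensor data fusion and defers the details to [9]. As a toy illustration of the underlying idea, the sketch below fuses two independent position estimates by inverse-variance weighting, a basic building block of Kalman-style fusion; the function name, the example sensors, and the numbers are hypothetical and not taken from the paper.

import numpy as np

def fuse_estimates(x1, var1, x2, var2):
    """Fuse two independent position estimates by inverse-variance weighting.

    x1, x2     : position estimates (e.g., from a sonar and a visual system)
    var1, var2 : their variances; the fused variance never exceeds the smaller one
    """
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    x_fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    var_fused = 1.0 / (w1 + w2)
    return x_fused, var_fused

# Example: a 2.0 m estimate (variance 0.25) and a 2.3 m estimate (variance 0.09)
# combine into a more certain estimate near 2.22 m with variance about 0.066.
print(fuse_estimates(2.0, 0.25, 2.3, 0.09))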
