Abstract
We present a vision-based approach for the reactive autonomous navigation of an underwater vehicle. In particular, we are interested in the exploration and continuous monitoring of coral reefs in order to diagnose disease or physical damage. An autonomous underwater vehicle must decide the best route in real time while avoiding collisions with fragile marine life and structures. We have opted to use only visual information as input. We have improved the Simple Linear Iterative Clustering (SLIC) algorithm, which, together with a simple nearest-neighbor classifier, robustly segments and classifies objects and water in a fast and efficient way, even under poor visibility conditions. From the resulting classification and the robot's current direction and orientation, the next possible collision-free route can be estimated. This is achieved by grouping together neighboring water superpixels, which are considered regions of interest (RoIs). Finally, we use a model-free robust control scheme that allows the robot to navigate autonomously through the collision-free routes obtained in the previous step. The experimental results, both in simulation and in practice, show the effectiveness of the proposed navigation system.

In this paper, an autonomous reactive navigation approach based on visual information for an underwater vehicle is presented. Our method is divided into two stages. In the first stage, we use our improved version of the SLIC superpixel algorithm (4) to robustly segment low-resolution images based on their color features. We exploit the fact that high-resolution information is not needed for navigation (see (5), (6) for studies on the subject) and that, for underwater images, the CIELab color space can be adjusted to highlight only the color of objects that are not water. Since this algorithm has linear computational complexity, the objects and the RoI (water) can be segmented in a fast and efficient way. A simple nearest-neighbor classifier is then applied to separate objects from water. In the second stage, from the resulting classification and the robot's current direction and orientation, the next possible collision-free route (also called the direction of escape) is estimated. This is achieved by computing the geometric center of the area covered by a RoI. We have previously applied this approach to indoor mobile robot navigation with promising results (7). Finally, we present a model-free robust control scheme that allows the robot to navigate autonomously by mapping the image error (defined in terms of the geometric center of the RoI) directly to robot motion.
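To make the first stage concrete, the sketch below segments a low-resolution frame into superpixels and labels each one as water or object with a 1-nearest-neighbor rule on mean CIELab color. It uses the standard SLIC implementation from scikit-image rather than the improved variant described above, and the reference Lab colors (water_refs, object_refs) are assumed to have been collected offline; both are illustrative stand-ins, not the paper's actual pipeline.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic

def classify_superpixels(image_rgb, water_refs, object_refs, n_segments=200):
    """Segment a frame into superpixels and mark each one as water (True)
    or object (False) by 1-NN on its mean CIELab color.

    water_refs / object_refs: (k, 3) arrays of reference Lab colors,
    assumed to have been gathered offline (hypothetical training data).
    """
    lab = rgb2lab(image_rgb)
    # Standard linear-time SLIC; the paper uses an improved variant.
    labels = slic(image_rgb, n_segments=n_segments, compactness=10,
                  start_label=0)
    is_water = np.zeros(labels.max() + 1, dtype=bool)
    for s in range(labels.max() + 1):
        mean_lab = lab[labels == s].mean(axis=0)
        d_water = np.linalg.norm(water_refs - mean_lab, axis=1).min()
        d_object = np.linalg.norm(object_refs - mean_lab, axis=1).min()
        is_water[s] = d_water < d_object
    return labels, is_water
```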
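The direction of escape is then the geometric center of the water RoI. The sketch below groups water pixels with a plain connected-component pass, which stands in for the neighboring-superpixel grouping described above, and returns the image error as the offset of the RoI center from the image center.

```python
import numpy as np
from scipy import ndimage

def escape_direction(labels, is_water, image_shape):
    """Return the image error (x, y offset from the image center) of the
    geometric center of the largest connected water region, or None if
    no water is visible (no collision-free route in view)."""
    water_mask = is_water[labels]              # per-pixel water mask
    components, n = ndimage.label(water_mask)
    if n == 0:
        return None
    sizes = ndimage.sum(water_mask, components, index=range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1
    cy, cx = ndimage.center_of_mass(components == largest)
    h, w = image_shape[:2]
    return np.array([cx - w / 2.0, cy - h / 2.0])
```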
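Finally, the second stage maps this image error directly to vehicle motion. The paper's controller is model-free and robust; the sketch below is only an illustrative proportional mapping, with hypothetical gains (k_yaw, k_pitch) and a constant forward (surge) speed, to show the direct image-error-to-motion structure.

```python
def servo_command(image_error, k_yaw=0.002, k_pitch=0.002, surge=0.3):
    """Map the image error of the RoI center to motion commands.
    Gains and surge speed are hypothetical placeholders."""
    ex, ey = image_error
    yaw_rate = k_yaw * ex      # RoI to the right -> turn right
    pitch_rate = k_pitch * ey  # RoI below center -> pitch down
    return surge, yaw_rate, pitch_rate
```

Driving the commanded rates with the sign of the error steers the vehicle toward the center of the free-water region, so the image error is regulated to zero as the robot follows the collision-free route.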