This work presents the development and field testing of a novel adaptive visual information gathering (AVIG) framework for autonomous exploration of benthic environments using AUVs. The objective is to dynamically adapt the robot's exploration using visual information gathered online. The framework is based on a novel decision-time adaptive replanning (DAR) behavior that works together with a sparse Gaussian process (SGP) for environmental modeling and a convolutional neural network (CNN) for semantic image segmentation, and it executes entirely during the mission. The SGP uses semantic data obtained from stereo images to probabilistically model the spatial distribution of certain seagrass species that colonize the sea bottom, forming widespread meadows. The uncertainty of the probabilistic model provides a measure of sampling informativeness to the DAR behavior, which executes successive informative paths, without stopping, using the newest information obtained from the SGP. We solve the informative path planning (IPP) problem by means of a novel depth-first (DF) variant of Monte Carlo tree search (MCTS). The DF-MCTS method explores the state space in a depth-first fashion, provides solution paths of a given length in an anytime manner, and rewards smooth paths suitable for field realization with non-holonomic robots. The complete framework has been integrated into a ROS environment as a high-level layer of the AUV software architecture. Simulations and field tests show the effectiveness of the framework for gathering data in P. oceanica environments.
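To make the planning idea concrete, the sketch below illustrates, in simplified form, how a depth-first MCTS rollout can produce fixed-length, smoothness-rewarded paths in an anytime manner. It is not the paper's implementation: the grid world, the precomputed `uncertainty` map standing in for the SGP posterior variance, the 8-connected motion model, the heading-change penalty, and names such as `dfs_mcts`, `reward`, and `BUDGET` are all illustrative assumptions.

```python
# Hypothetical depth-first MCTS sketch for informative path planning.
# Each iteration rolls out a full path to the horizon (depth-first),
# so a complete best-so-far path is always available (anytime property).
import math
import random

GRID = 20                 # side length of the square workspace (assumed)
HORIZON = 8               # fixed path length requested from the planner
BUDGET = 2000             # number of search iterations (anytime knob)
SMOOTHNESS_WEIGHT = 0.3   # penalty weight on heading changes (assumed)

# Stand-in for SGP posterior variance: higher value = more informative cell.
random.seed(0)
uncertainty = [[random.random() for _ in range(GRID)] for _ in range(GRID)]

MOVES = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
         (0, 1), (1, -1), (1, 0), (1, 1)]

def reward(path):
    """Sum of uncertainty along the path minus a smoothness penalty."""
    info = sum(uncertainty[x][y] for x, y in path)
    turn = 0.0
    for a, b, c in zip(path, path[1:], path[2:]):
        h1 = math.atan2(b[1] - a[1], b[0] - a[0])
        h2 = math.atan2(c[1] - b[1], c[0] - b[0])
        turn += abs(math.atan2(math.sin(h2 - h1), math.cos(h2 - h1)))
    return info - SMOOTHNESS_WEIGHT * turn

def neighbors(cell, visited):
    """Feasible next cells: inside the grid and not yet visited."""
    x, y = cell
    return [(x + dx, y + dy) for dx, dy in MOVES
            if 0 <= x + dx < GRID and 0 <= y + dy < GRID
            and (x + dx, y + dy) not in visited]

def dfs_mcts(start):
    """Depth-first rollouts to a fixed horizon with UCB child selection."""
    best_path, best_score = None, -math.inf
    stats = {}  # tuple(partial path) -> (visits, total score)
    for _ in range(BUDGET):
        path = [start]
        while len(path) < HORIZON:
            options = neighbors(path[-1], set(path))
            if not options:
                break
            def ucb(c):
                n, s = stats.get(tuple(path) + (c,), (0, 0.0))
                if n == 0:
                    return math.inf  # expand unvisited children first
                parent_n = sum(stats.get(tuple(path) + (o,), (0, 0.0))[0]
                               for o in options) or 1
                return s / n + 1.4 * math.sqrt(math.log(parent_n) / n)
            path.append(max(options, key=ucb))
        score = reward(path)
        # Backpropagate the rollout score along every prefix of the path.
        for depth in range(2, len(path) + 1):
            n, s = stats.get(tuple(path[:depth]), (0, 0.0))
            stats[tuple(path[:depth])] = (n + 1, s + score)
        if len(path) == HORIZON and score > best_score:
            best_path, best_score = path, score
    return best_path, best_score

path, score = dfs_mcts((GRID // 2, GRID // 2))
print(f"best path ({len(path)} waypoints), score {score:.2f}")
```

Because every iteration terminates at the horizon, the planner can be interrupted at any point and still return a valid fixed-length path, which is the anytime behavior the abstract attributes to DF-MCTS; the smoothness penalty is one plausible way to bias the search toward paths executable by a non-holonomic vehicle.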