Abstract

Some underwater applications involve deploying multiple Remotely Operated Vehicles (ROVs) in a common area. Such applications require localizing these vehicles, not only with respect to each other but also with respect to a previously unknown environment. To this end, this work presents MAM3SLAM, a new fully centralized multi-agent, multi-map monocular Visual Simultaneous Localization And Mapping (VSLAM) framework. Multi-agent evaluation metrics are introduced to provide an extensive comparison of MAM3SLAM against state-of-the-art multi-agent VSLAM methods on four two-agent scenarios: one standard airborne dataset and three new underwater datasets recorded in a pool and at sea. The results show that MAM3SLAM is robust to underwater visual conditions and tracking failures, and that it outperforms the other evaluated methods both in estimating the individual and relative poses of the agents and in collaborative mapping accuracy. MAM3SLAM successfully estimates the individual and relative localization of the agents with an error below 5 cm on three of the four test sequences, and it is twice as accurate as competing multi-agent methods in challenging visual conditions with frequent visual dropouts, poor texture, low frame rate, and fast motion. MAM3SLAM’s source code is made publicly available, as are the underwater datasets.
