Abstract
This paper presents an end-to-end Neural Network (NN) to estimate the overlap between two scenes observed by an underwater robot equipped with a bottom-looking camera. This information is extremely valuable for visual loop detection in Simultaneous Localization and Mapping (SLAM). Unlike other existing approaches, this study does not depend on handcrafted features or similarity metrics, but jointly optimizes the image description and the loop detection by means of a Siamese NN architecture. Twelve different configurations have been experimentally tested using large, balanced datasets synthetically generated from real data. These experiments demonstrate the ability of our proposal to properly estimate the overlap, with precision, recall, and fall-out values close to 95%, 98%, and 5%, respectively, and execution times close to 0.7 ms per loop on a standard laptop computer. The source code of this proposal is publicly available.
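As a rough illustration of the kind of architecture the abstract describes, the sketch below shows a Siamese network that embeds two images with shared weights and regresses their overlap in [0, 1]. It is a minimal assumption-based example: the layer sizes, embedding dimension, and fusion head are illustrative choices, not the authors' actual configuration.

```python
# Minimal sketch (not the paper's exact architecture): a Siamese CNN that maps
# two bottom-looking images to embeddings with shared weights, then regresses
# their overlap ratio in [0, 1]. All layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SiameseOverlapNet(nn.Module):
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        # Shared convolutional branch: both images pass through the same weights.
        self.branch = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embedding_dim), nn.ReLU(),
        )
        # Regression head: fuses the two embeddings into a single overlap estimate.
        self.head = nn.Sequential(
            nn.Linear(2 * embedding_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
        emb_a = self.branch(img_a)          # embedding of first image
        emb_b = self.branch(img_b)          # embedding of second image (same weights)
        fused = torch.cat([emb_a, emb_b], dim=1)
        return self.head(fused).squeeze(1)  # predicted overlap in [0, 1]

# Usage: overlap = SiameseOverlapNet()(img_a, img_b), with inputs shaped (N, 3, H, W).
# Thresholding the predicted overlap yields a binary loop / no-loop decision.
```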