Abstract

Loop closure detection is a core component of visual simultaneous localization and mapping (VSLAM) for autonomous robots. In dynamic environments, loop closure detection becomes considerably more difficult than in the static case. This paper proposes a novel approach based on a convolutional autoencoder neural network (CAENN) architecture for extracting image features, using a Euclidean loss to minimize the difference between the extracted feature and the gist feature, which is well suited to scene recognition. To improve the accuracy and recall of loop closure detection in dynamic scenes, perspective transformations and dynamic objects are incorporated when constructing the training set. A loop closure is accepted when the Manhattan distance between two image feature vectors is smaller than a threshold. Experimental results demonstrate that the proposed method achieves better accuracy and recall than the commonly used gist feature method, and incurs lower time and space costs than the BoW and AlexNet methods for loop closure detection.
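The abstract names two distance computations: a Euclidean loss that pulls the CAENN feature toward the gist descriptor during training, and a Manhattan distance threshold test for accepting a loop closure at query time. The sketch below illustrates both; the function names, the 512-dimensional feature size, and the threshold value are hypothetical and are not specified in the paper.

```python
import numpy as np

def euclidean_loss(caenn_feature: np.ndarray, gist_feature: np.ndarray) -> float:
    """Euclidean (L2) loss between the CAENN-extracted feature and the
    gist descriptor of the same image, as used as a training target."""
    return float(np.sum((caenn_feature - gist_feature) ** 2))

def is_loop_closure(feat_a: np.ndarray, feat_b: np.ndarray, threshold: float) -> bool:
    """Accept a loop closure when the Manhattan (L1) distance between
    the two image feature vectors falls below the threshold."""
    return float(np.sum(np.abs(feat_a - feat_b))) < threshold

# Hypothetical usage with random 512-dimensional features and an
# illustrative threshold; real values would come from the trained network.
rng = np.random.default_rng(0)
query, candidate = rng.random(512), rng.random(512)
print(euclidean_loss(query, candidate))
print(is_loop_closure(query, candidate, threshold=120.0))
```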
