Abstract
Feature extraction plays a crucial role in visual localization, SLAM (Simultaneous Localization and Mapping), and autonomous navigation by enabling the detection and tracking of distinctive visual features for both mapping and localization tasks. However, most studies investigate the efficiency and performance of such algorithms in urban, vegetated, or indoor environments rather than in unstructured environments, which suffer from a scarcity of the visual cues on which a feature extraction algorithm or architecture can rely. In this study, the efficiency of the SuperPoint architecture in keypoint detection and description was investigated on unstructured and planetary scenes, producing three models: (a) an original SuperPoint model trained from scratch, (b) an original SuperPoint model that was fine-tuned, and (c) an optimized SuperPoint model trained from scratch with the same parameterization as the corresponding original model. For training, a dataset of 48,000 images representing unstructured scenes from Earth, the Moon, and Mars was utilized, while a benchmark dataset was used to evaluate the models under illumination and viewpoint changes. The experiments showed that, on repeatability and homography estimation metrics, the optimized SuperPoint model outperforms both the original SuperPoint models and handcrafted keypoint detectors and descriptors.
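The repeatability metric mentioned above measures how often keypoints detected in a reference image are re-detected in a transformed view. A minimal sketch of that computation is shown below, assuming a known ground-truth homography and a pixel tolerance `eps` (the 3-pixel default here is an illustrative choice, not necessarily the threshold used in the study):

```python
import numpy as np

def warp_points(points, H):
    """Apply a 3x3 homography H to an (N, 2) array of pixel coordinates."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    warped = pts_h @ H.T
    return warped[:, :2] / warped[:, 2:3]  # back to Euclidean coordinates

def repeatability(kpts_ref, kpts_tgt, H, eps=3.0):
    """Fraction of reference keypoints re-detected within eps pixels
    after warping them into the target image via the homography H."""
    warped = warp_points(kpts_ref, H)
    # pairwise distances between warped reference keypoints and target detections
    dists = np.linalg.norm(warped[:, None, :] - kpts_tgt[None, :, :], axis=2)
    matched = (dists.min(axis=1) <= eps).sum()
    return matched / max(len(kpts_ref), 1)
```

With an identity homography and identical keypoint sets, the score is 1.0; it drops toward 0.0 as detections in the second view move away from their warped counterparts.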
Published in: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences