Abstract

A robust visual location estimation scheme for large-scale complex scenes is critical for location-relevant Internet of Things (IoT) applications such as autonomous vehicles and intelligent robots. It is challenging, however, because of viewpoint changes, weak textures, and large-view scenes. To address the location ambiguities and the limited positioning timeliness that arise in complex scenes, we propose an efficient and precise location estimation method based on priority matching-based pose verification. The verification consists of two modules, scene semantic verification and 3D–2D keypoint filtering, which together improve visual localization performance. The scene semantic verification module retrieves the 3D points that conform to the semantics of the query image: it compares the 3D points corresponding to the generated candidate poses with the query semantic image for semantic consistency, overcoming the ambiguity of scene descriptors in large-view and weak-texture scenes. The 3D–2D keypoint filtering module selects 3D and 2D keypoints uniformly distributed over the scene by projection and voting and then backward-matches them with the query image, which considerably strengthens pose verification and consequently improves pose accuracy in scenes with viewpoint changes. Experimental results show that our method achieves 1 m localization accuracy with an average probability of 78.86% in scenes with viewpoint changes, weak textures, and large views on the public InLoc indoor dataset, and improves accuracy over related state-of-the-art methods by 11.6% on three public outdoor datasets: Aachen Day-Night, RobotCar Seasons, and CMU Seasons. These results demonstrate that our method provides a robust visual localization solution for edge-cloud collaborative IoT in complex, large-scale scenes.
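
The pipeline itself is not spelled out in the abstract, so the following is only a minimal, hypothetical Python sketch of the two verification ideas it names: checking semantic consistency by projecting a candidate pose's 3D points into the query semantic image, and spreading keypoints uniformly with a grid-based vote before backward matching. The function names, pinhole camera model, grid size, and scoring inputs are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code) of the two verification ideas:
# semantic-consistency checking of a candidate pose and grid-based uniform
# keypoint selection. Assumes a pinhole camera, per-3D-point semantic labels,
# and externally provided matching scores.
import numpy as np

def project(points_3d, R, t, K):
    """Project Nx3 world points into the image for pose (R, t) and intrinsics K."""
    cam = (R @ points_3d.T + t.reshape(3, 1)).T        # world -> camera frame
    in_front = cam[:, 2] > 1e-6                        # keep points ahead of the camera
    uvw = (K @ cam.T).T
    z = np.where(np.abs(uvw[:, 2:3]) < 1e-9, 1e-9, uvw[:, 2:3])
    return uvw[:, :2] / z, in_front                    # perspective divide

def semantic_consistency(points_3d, point_labels, R, t, K, query_semantics):
    """Fraction of visible 3D points whose label agrees with the query semantic image."""
    h, w = query_semantics.shape
    uv, in_front = project(points_3d, R, t, K)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    if not visible.any():
        return 0.0
    return float(np.mean(query_semantics[v[visible], u[visible]] == point_labels[visible]))

def uniform_keypoint_filter(uv, scores, image_size, grid=8):
    """Voting over a grid: keep the highest-scoring keypoint per cell so the
    retained 2D projections are spread uniformly over the image."""
    w, h = image_size
    col = (np.clip(uv[:, 0], 0, w - 1) // (w / grid)).astype(int)
    row = (np.clip(uv[:, 1], 0, h - 1) // (h / grid)).astype(int)
    best = {}
    for i, cell in enumerate(col * grid + row):
        if cell not in best or scores[i] > scores[best[cell]]:
            best[cell] = i
    return np.array(sorted(best.values()))             # indices of retained keypoints
```

In a full pipeline along these lines, each candidate pose would be scored by combining its semantic-consistency ratio with the number of backward-matched inliers that survive the uniform filter, and the best-scoring pose would be returned as the final estimate.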
