Abstract

Visual localization is an active research topic in computer vision and photogrammetry. It can provide meter-level or better localization accuracy in environments where GPS signals are unavailable. However, achieving efficient, robust, and highly accurate visual localization under day-night changes remains challenging. To address this problem, we develop an improved lightweight deep neural network, trained with knowledge distillation, that efficiently extracts deep local features from imagery while maintaining strong robustness for day-night visual localization. To further improve localization accuracy, we build a prior map from aligned dense LiDAR point clouds and imagery collected by a new portable camera-LiDAR integrated device, and directly use the 2D-3D correspondences between the 2D local feature points extracted by our lightweight network and the 3D laser points retrieved from the prior map for localization. Moreover, we build our own ground-truth point cloud dataset, accurate to 5 cm, to evaluate the accuracy of the constructed prior map, as well as a day-night dataset comprising prior-map and verification data for evaluating the proposed visual localization method. The experimental results show that our method balances efficiency and robustness while improving localization accuracy under day-night changes. Compared with a variety of state-of-the-art deep-learning-based local feature extraction methods, our lightweight network has the fewest parameters (0.2 million) and the highest feature extraction speed (92 frames per second), on par with the classic real-time ORB feature extractor. Our network also remains competitive with other advanced deep local feature extraction networks in feature matching and day-night visual localization.
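The abstract does not detail the distillation objective used to train the lightweight network; a common formulation for compressing a local-feature extractor is to regress the student's descriptors toward those of a frozen teacher. A minimal NumPy sketch under that assumption (the loss form and names are illustrative, not the paper's):

```python
import numpy as np

def distillation_loss(student_desc, teacher_desc):
    """Mean squared distance between L2-normalized student and teacher
    descriptors (one descriptor per row). Illustrative only: the actual
    distillation objective of the paper is not specified in the abstract."""
    s = student_desc / np.linalg.norm(student_desc, axis=1, keepdims=True)
    t = teacher_desc / np.linalg.norm(teacher_desc, axis=1, keepdims=True)
    return float(np.mean(np.sum((s - t) ** 2, axis=1)))
```

Because the descriptors are normalized before comparison, the student only has to match the teacher's descriptor directions, not their magnitudes.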
In addition, evaluations performed on our own dataset demonstrate that our visual localization method using images and LiDAR point clouds achieves a localization error of 1.2 m under day-night changes, which is much smaller than that of a state-of-the-art purely visual localization method.
