Abstract
As fingerprinting-based visual localization technology has matured, one problem has become prominent: fingerprint collection is expensive. A few recent studies have proposed methods to alleviate this problem, but the accuracy of existing methods remains relatively low in some scenarios, such as wide fields of view. In this paper, we propose a novel automatic visual fingerprinting (AVF) method for an indoor visual localization system. The performance of AVF hinges largely on the visual odometry (VO) and ego-motion estimation (EME) blocks, which are two different ways of estimating fingerprint coordinates. Since both the VO and the EME models are inaccurate, we formulate a least-squares model as a second-order cone program (SOCP). The SOCP-based method is designed to handle the severe cumulative error introduced by the VO model and the random error introduced by the EME model. The goal of this paper is to improve the accuracy of the database generated by the AVF method in wide-field-of-view scenarios. Although the time cost is higher than that of the compared method, it is incurred only in the offline stage. Simulation results show that our method can provide a reliable image-location database using a consumer-grade smartphone camera.
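To illustrate the kind of formulation the abstract describes, the following is a minimal sketch, assuming fingerprint coordinates along a trajectory are fused from noisy VO relative displacements and noisy EME absolute estimates by minimizing a sum of Euclidean-norm residuals, which is solvable as an SOCP. The data, variable names (vo_deltas, eme_positions), and the use of CVXPY are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's implementation): fusing VO relative
# displacements and EME absolute position estimates into fingerprint
# coordinates by solving a small SOCP with CVXPY.
import numpy as np
import cvxpy as cp

# Hypothetical measurements for N fingerprint points in 2-D.
N = 5
vo_deltas = np.array([[1.0, 0.0]] * (N - 1))                              # relative steps from VO (drift-prone)
eme_positions = np.array([[i, 0.05 * i] for i in range(N)], dtype=float)  # noisy EME absolute estimates

x = cp.Variable((N, 2))  # fingerprint coordinates to estimate

# Each Euclidean-norm residual is a second-order cone term, so the
# overall problem is an SOCP.
vo_residuals = [cp.norm(x[i + 1] - x[i] - vo_deltas[i]) for i in range(N - 1)]
eme_residuals = [cp.norm(x[i] - eme_positions[i]) for i in range(N)]

objective = cp.Minimize(sum(vo_residuals) + sum(eme_residuals))
constraints = [x[0] == eme_positions[0]]  # anchor the first fingerprint
problem = cp.Problem(objective, constraints)
problem.solve()

print("Estimated fingerprint coordinates:\n", x.value)
```

Minimizing sums of norms rather than squared norms keeps the problem in SOCP form; the actual weighting between the VO terms (cumulative error) and the EME terms (random error) in the paper would depend on their respective error models.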