Abstract

Indoor positioning is in high demand in a variety of applications, and indoor environments are challenging scenes for visual positioning. This paper proposes an accurate visual positioning method for smartphones. The proposed method includes three procedures. First, an indoor high-precision 3D photorealistic map is produced using a mobile mapping system, and the intrinsic and extrinsic parameters of the images are obtained from the mapping result. A point cloud is calculated using feature matching and multi-view forward intersection. Second, the top-K similar images are queried using Hamming embedding with SIFT feature description. Feature matching and pose voting are used to select the correctly matched images, and the relationship between image points and 3D points is obtained. Finally, outlier points are removed using P3P with a coarse focal length. A perspective-four-point solver with unknown focal length and random sample consensus (RANSAC) are used to calculate the intrinsic and extrinsic parameters of the query image and thereby obtain the position of the smartphone. Compared with established baseline methods, the proposed method is more accurate and reliable. The experimental results show that 70 percent of the images achieve a location error smaller than 0.9 m in a 10 m × 15.8 m room, and the prospect of improvement is discussed.
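The final procedure hinges on rejecting 2D–3D correspondences whose reprojection error under a candidate pose is too large. The following is a minimal NumPy sketch of that check, not the paper's implementation: the function name, the 3-pixel threshold, and the synthetic camera values are all illustrative assumptions.

```python
import numpy as np

def reprojection_inliers(K, R, t, pts3d, pts2d, thresh_px=3.0):
    """Flag 2D-3D correspondences whose reprojection error is below thresh_px.

    K: 3x3 intrinsics, R: 3x3 rotation, t: 3-vector translation,
    pts3d: (N, 3) world points, pts2d: (N, 2) observed image points.
    """
    hom = K @ (R @ pts3d.T + t.reshape(3, 1))   # project into homogeneous pixels
    proj = (hom[:2] / hom[2]).T                 # (N, 2) pixel coordinates
    err = np.linalg.norm(proj - pts2d, axis=1)  # per-correspondence error (px)
    return err < thresh_px

# Synthetic check: a camera 5 m in front of a small point cluster.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
pts3d = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
hom = K @ (R @ pts3d.T + t.reshape(3, 1))
pts2d = (hom[:2] / hom[2]).T
pts2d[3] += 50.0                                # corrupt one match: an outlier
mask = reprojection_inliers(K, R, t, pts3d, pts2d)
```

In a RANSAC loop, a solver such as P3P (or P4P with unknown focal length, as in the paper) would be run on random minimal subsets, and a check like this one would score each candidate pose by its inlier count before the final pose is refit on all inliers.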

Highlights

  • With the development of smartphones and web Geographic Information System (GIS), location-based services (LBS) are changing people’s daily lives

  • This study proposes an indoor visual positioning solution that matches smartphone camera images with a high-precision 3D photorealistic map

  • To evaluate the proposed method, two different places are used in the experiment



Introduction

With the development of smartphones and web Geographic Information Systems (GIS), location-based services (LBS) are changing people's daily lives. This study proposes an indoor visual positioning solution that matches smartphone camera images with a high-precision 3D photorealistic map. Image-based positioning has been a focus of research in outdoor environments. These methods usually contain two steps, namely, place recognition [14] and perspective-n-point (PnP) [15], which calculates the extrinsic parameters. Matching images against a feature point cloud instead of database images improves the efficiency of the localization procedure [31]. Most of these methods are used in outdoor environments. Indoors, coded reference labels on walls have been used to locate images. These methods achieve accuracies ranging from decimeters to meters [33]. In this paper, using a large set of indoor images with known pose parameters as a database, an automatic and robust visual positioning method is proposed.
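The place-recognition step retrieves the top-K database images most similar to the query. The sketch below illustrates the idea with a simple descriptor-voting scheme using Lowe's ratio test; this is a simplification standing in for the paper's Hamming-embedding retrieval, and the function name and synthetic descriptors are assumptions for illustration only.

```python
import numpy as np

def top_k_similar(query_desc, db_descs, k=2, ratio=0.8):
    """Score each database image by counting query descriptors whose nearest
    descriptor in that image passes the ratio test; return top-k image indices.

    query_desc: (M, D) query descriptors; db_descs: list of (Ni, D) arrays.
    """
    scores = []
    for desc in db_descs:
        # pairwise distances between query descriptors and this image's
        d = np.linalg.norm(query_desc[:, None, :] - desc[None, :, :], axis=2)
        d.sort(axis=1)
        # a match votes only if clearly closer than the second-nearest neighbor
        votes = int(np.sum(d[:, 0] < ratio * d[:, 1]))
        scores.append(votes)
    order = np.argsort(scores)[::-1]
    return order[:k].tolist(), scores

# Tiny synthetic check: image 0 shares the query's descriptors, image 1 does not.
query = np.array([[1.0, 0], [0, 1], [1, 1]])
db = [np.array([[1.0, 0], [0, 1], [1, 1], [5, 5]]),
      np.array([[10.0, 10], [10.1, 10], [10, 10.1]])]
idx, scores = top_k_similar(query, db, k=1)
```

Real systems replace the brute-force distance matrix with an inverted index over quantized descriptors (the role Hamming embedding plays in the paper), which keeps retrieval fast over thousands of database images.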

Methodology
High-Precision 3D Photorealistic Map
Build Feature Database
Image Feature Matching
Comparison
Feature Points to 3D Point Cloud
Smartphone Visual Positioning
Place Recognition and Feature Matching
Comparison
Image Positioning
Test Data
Evaluation
Discussion
Change Procedure
Conclusions
