Abstract

The advancement of microelectromechanical systems (MEMS) and the Internet of Things (IoT) has enabled a wide range of smartphone-based applications. However, existing navigation methods using these low-cost MEMS sensors cannot provide acceptable positioning information for location-based applications across diverse environments. Technical limitations such as severe signal attenuation, reflections, blockages, error accumulation, and low image quality degrade the performance of the Global Navigation Satellite System (GNSS), the Inertial Navigation System (INS), and the camera. To mitigate these limitations, especially for indoor vehicle navigation, we first analyze the performance of the existing fusion algorithm and then propose the Semantic Proximity Update (SPU), which uses a pre-trained real-time object-detection model to enhance the integration of GNSS, INS, and the Visual Inertial Navigation System (VINS). SPU combines the detection of geo-referenced objects with relative movement to infer the absolute position. The proposed INS/GNSS/VINS/SPU integration maintains acceptable accuracy over the long term, regardless of whether the environment is indoor or outdoor. It requires only smartphone sensors, so the scheme imposes no additional cost on users. Experimental results indicate that the horizontal and three-dimensional positioning errors of this scheme were 51.6% and 86.8% lower, respectively, than those of a conventional integration.
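The core idea of SPU, inferring an absolute position by combining a detected geo-referenced object with the estimated relative movement, can be sketched as follows. This is a minimal illustration under assumed inputs; the landmark database, identifiers, and measurements are hypothetical and not the authors' implementation.

```python
# Hypothetical sketch of SPU position inference: when a geo-referenced
# object (e.g. a sign with a known surveyed location) is detected, the
# absolute position follows from the landmark position minus the
# camera-to-landmark displacement estimated from the detection.
# All names and values below are illustrative assumptions.

# Surveyed landmark positions (east, north, up) in a local frame, metres.
LANDMARKS = {
    "exit_sign_B2": (12.0, -3.5, -6.0),
    "pillar_A17": (40.2, 18.1, -6.0),
}

def spu_position(landmark_id, rel_displacement):
    """Infer the absolute camera position from a detected landmark and
    the estimated displacement from the camera to that landmark."""
    lx, ly, lz = LANDMARKS[landmark_id]
    dx, dy, dz = rel_displacement
    # camera = landmark - (landmark position relative to the camera)
    return (lx - dx, ly - dy, lz - dz)

# Example: the detector estimates the exit sign lies 2 m east and
# 1 m north of the camera.
pos = spu_position("exit_sign_B2", (2.0, 1.0, 0.0))
print(pos)  # (10.0, -4.5, -6.0)
```

In the actual system this inferred absolute position would serve as a measurement update fused with the INS/GNSS/VINS solution, which is what allows the integration to bound drift indoors where GNSS is unavailable.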
