Abstract

The need for an accurate indoor positioning system has rapidly increased with the development of large, complex malls and underground spaces. Because global positioning system signals cannot be received inside buildings, only approximate locations can be estimated from Wi-Fi routers or cellular base station information, and exact locations cannot be determined. Pedestrian dead reckoning (PDR) schemes that combine several smartphone sensors have therefore been suggested. However, these schemes require users to hold their smartphones in a specific manner, and user-dependent parameters, such as height and step length, must be supplied because the sensor readings vary from user to user. This study uses deep-learning algorithms to overcome these limitations of the existing smartphone-based PDR scheme. A convolutional neural network (CNN) classifies the smartphone carrying position so that the appropriate sensor data can be selected and adjusted, and a long short-term memory (LSTM) network estimates the user's step length. Although deep learning enhances the PDR performance, accumulated error is unavoidable because the algorithm traces only the position relative to the starting location. Therefore, optical camera communication (OCC) is introduced to provide a reference location and periodically compensate for the accumulated PDR error. The proposed algorithm is experimentally demonstrated, and the results are analyzed.
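The abstract describes a pipeline in which a CNN classifies how the phone is carried, an LSTM regresses the step length, and each detected step advances a dead-reckoning position estimate that OCC periodically corrects. The following is a minimal sketch of that pipeline only; the network sizes, window length, carrying-position classes, and the use of PyTorch are illustrative assumptions, not the paper's implementation, and the networks are merely instantiated (untrained) to show how their outputs feed the position update.

    # Hypothetical sketch of the deep-learning-aided PDR pipeline (assumed shapes and classes).
    import math
    import torch
    import torch.nn as nn

    WINDOW = 128          # samples per sensor window (assumed)
    CHANNELS = 6          # 3-axis accelerometer + 3-axis gyroscope
    POSITIONS = ["hand", "pocket", "swinging", "calling"]  # assumed carrying-position classes

    class PositionCNN(nn.Module):
        """1-D CNN that classifies the smartphone carrying position."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(CHANNELS, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.classifier = nn.Linear(64, len(POSITIONS))

        def forward(self, x):                 # x: (batch, CHANNELS, WINDOW)
            return self.classifier(self.features(x).squeeze(-1))

    class StepLengthLSTM(nn.Module):
        """LSTM that regresses the step length (metres) from a sensor window."""
        def __init__(self):
            super().__init__()
            self.lstm = nn.LSTM(CHANNELS, 64, batch_first=True)
            self.head = nn.Linear(64, 1)

        def forward(self, x):                 # x: (batch, WINDOW, CHANNELS)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])      # use the last hidden state

    def pdr_update(pos, heading_rad, step_len):
        """Advance the (x, y) estimate by one step along the current heading."""
        return (pos[0] + step_len * math.cos(heading_rad),
                pos[1] + step_len * math.sin(heading_rad))

    if __name__ == "__main__":
        window = torch.randn(1, CHANNELS, WINDOW)      # stand-in for a real IMU window
        phone_pos = POSITIONS[PositionCNN()(window).argmax().item()]
        step = StepLengthLSTM()(window.transpose(1, 2)).item()
        print(phone_pos, pdr_update((0.0, 0.0), math.radians(30), step))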

Highlights

  • With the development of large, complex malls and underground spaces, the need for indoor location recognition is rapidly increasing

  • An experiment was conducted while walking along a rectangular path of 16.6 m width and 9.5 m length to test the deep-learning-based pedestrian dead reckoning (PDR) and optical camera communication (OCC) proposed in this study

  • When only the PDR was used without deep learning, the smartphone position was identified using various sensor values, and the step length was calculated using the peak values of the accelerometer [2], as sketched below
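The baseline in the last highlight computes step length from accelerometer peaks [2]. The exact formula from [2] is not reproduced in this summary, so the sketch below uses a common peak-and-valley (Weinberg-style) approximation purely to illustrate the idea; the gain K and the detection thresholds are assumed calibration values, not values from the paper.

    # Step detection and step-length estimation from accelerometer peaks (assumed baseline).
    import numpy as np
    from scipy.signal import find_peaks

    def steps_from_accel(acc_norm, fs=100.0, K=0.5):
        """Detect steps in the accelerometer magnitude and estimate each
        step length as K * (a_max - a_min) ** 0.25 (Weinberg approximation)."""
        signal = acc_norm - np.mean(acc_norm)            # remove gravity offset
        peaks, _ = find_peaks(signal, height=0.5, distance=int(0.3 * fs))
        lengths = []
        for i, p in enumerate(peaks):
            start = peaks[i - 1] if i > 0 else 0
            a_max = signal[start:p + 1].max()
            a_min = signal[start:p + 1].min()
            lengths.append(K * (a_max - a_min) ** 0.25)
        return peaks, lengths

    if __name__ == "__main__":
        fs = 100.0
        t = np.arange(0, 10, 1 / fs)
        fake_acc = 9.8 + 1.5 * np.sin(2 * np.pi * 2.0 * t)   # ~2 steps per second
        peaks, lengths = steps_from_accel(fake_acc, fs)
        print(len(peaks), "steps, total", round(sum(lengths), 2), "m")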

Summary

INTRODUCTION

With the development of large, complex malls and underground spaces, the need for indoor location recognition is rapidly increasing. We employ deep-learning algorithms, a CNN and an LSTM, to detect the smartphone carrying position and the stride, respectively. These results are combined with those of the PDR scheme to enhance location estimation. OCC with digital zoom is used to receive the data when the distance between the LED lamp and the camera exceeds several meters [15]. In this manner, the position estimation errors accumulated by the PDR can be periodically corrected. If the walking path is set apart from the LED lamp and the smartphone faces the lamp to receive data, the location information obtained from the lamp can be corrected using equations in which α is the tilt angle toward the lamp, H is the height of the lamp, and (x, y) is the correction distance. The resulting two-dimensional location error between the LED and the smartphone is within 15 cm.
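The correction equations themselves are not reproduced in this summary; only the quantities α, H, and (x, y) are named. The sketch below therefore shows one plausible trigonometric reconstruction under the assumptions that α is measured from the vertical and that the compass heading θ from the phone toward the lamp is known. It is an assumed geometry for illustration, not the paper's published formulas.

    # Hypothetical OCC-based reference fix from a detected LED lamp (assumed geometry).
    import math

    def occ_reference_position(lamp_xy, H, alpha_deg, theta_deg):
        """Estimate the camera's (x, y) from a lamp broadcasting its coordinates lamp_xy.

        Assumes alpha is the tilt angle from the vertical and theta is the
        compass heading from the phone toward the lamp.
        """
        d = H * math.tan(math.radians(alpha_deg))    # horizontal phone-to-lamp distance
        x = lamp_xy[0] - d * math.cos(math.radians(theta_deg))
        y = lamp_xy[1] - d * math.sin(math.radians(theta_deg))
        return x, y

    if __name__ == "__main__":
        # Lamp at coordinates (5.0, 3.0) m and height 2.5 m above the camera (assumed values).
        print(occ_reference_position((5.0, 3.0), H=2.5, alpha_deg=35.0, theta_deg=60.0))

Whenever such a reference fix is obtained, the drifting PDR estimate can be replaced by (or fused with) it, which is how the accumulated error is kept bounded.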
