Abstract

Accurate stride-length estimation is a fundamental component in numerous applications, such as pedestrian dead reckoning, gait analysis, and human activity recognition. Existing stride-length estimation algorithms work relatively well when a pedestrian walks in a straight line at normal speed, but their errors grow rapidly in complex scenes. Inaccurate walking-distance estimation leads to large cumulative positioning errors in pedestrian dead reckoning. This paper proposes TapeLine, an adaptive stride-length estimation algorithm that automatically estimates a pedestrian’s stride length and walking distance using the low-cost inertial sensors embedded in a smartphone. TapeLine consists of a Long Short-Term Memory (LSTM) module and denoising autoencoders (DAEs) that sanitize the noise in raw inertial-sensor data. In addition to the accelerometer and gyroscope readings during a stride interval, higher-level features drawn from earlier studies are also fed to the proposed network model for stride-length estimation. To train the model and evaluate its performance, we designed a platform that simultaneously collects inertial-sensor measurements from a smartphone as training data, and pedestrian step events, actual stride lengths, and cumulative walking distance from a foot-mounted inertial navigation system module as training labels. We conducted extensive experiments to verify the performance of the proposed algorithm and compared it with state-of-the-art stride-length estimation (SLE) algorithms.
The experimental results demonstrate that the proposed algorithm outperforms existing methods and achieves good estimation accuracy, with a stride-length error rate of 4.63% and a walking-distance error rate of 1.43%, using only the inertial sensors embedded in a smartphone and without relying on any additional infrastructure or pre-collected database, when a pedestrian walks through complex indoor and outdoor environments (stairs, spiral stairs, escalators, and elevators) with natural motion patterns (fast walking, normal walking, slow walking, running, and jumping).
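The abstract reports two accuracy metrics: a per-stride error rate and a cumulative walking-distance error rate. The sketch below shows one plausible way to compute them from per-stride predictions and ground-truth labels; the function names are illustrative and the exact definition used in the paper (e.g. mean per-stride relative error) is an assumption, not stated in this excerpt.

```python
def stride_length_error_rate(pred, truth):
    """Mean per-stride relative error in percent (assumed metric)."""
    assert len(pred) == len(truth) and len(truth) > 0
    return 100.0 * sum(abs(p - t) / t for p, t in zip(pred, truth)) / len(truth)

def walking_distance_error_rate(pred, truth):
    """Relative error in percent of the cumulative walking distance."""
    return 100.0 * abs(sum(pred) - sum(truth)) / sum(truth)

# Toy example: three strides, lengths in metres.
predicted = [0.70, 0.74, 0.66]
actual = [0.72, 0.70, 0.66]
sle_err = stride_length_error_rate(predicted, actual)    # per-stride error (%)
dist_err = walking_distance_error_rate(predicted, actual)  # distance error (%)
```

Note that the distance error can be much smaller than the per-stride error, as in the abstract's 1.43% vs. 4.63%, because per-stride over- and under-estimates partially cancel when summed.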

Highlights

  • Accurate and pervasive indoor positioning significantly improves our daily life [1]. The demand for accurate and practical location-based services anywhere using portable devices, such as smartphones, is quickly increasing in various applications, including asset and personnel tracking, health monitoring, precision advertising, and location-specific push notifications.

  • Motivated by the fact that speech recognition based on deep learning outperforms traditional speech recognition methods, this paper proposes a stride-length estimation method based on Long Short-Term Memory (LSTM) and Denoising Autoencoders (DAEs).

  • We propose a training framework that combines LSTM and DAE to handle sequential data, extracting temporal features while denoising, and a stride-length estimation model based on this framework.
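The highlights describe the LSTM+DAE pipeline only at a high level. The sketch below shows the shape of such a forward pass with untrained random weights: a DAE cleans each sensor frame, then a single-layer LSTM consumes the stride window and regresses a scalar stride length. All layer sizes, variable names, and the single-layer structure are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dae_forward(x, w_enc, w_dec):
    """Denoising-autoencoder pass: map a noisy sensor frame to a cleaned one."""
    return np.tanh(np.tanh(x @ w_enc) @ w_dec)

def lstm_regress(seq, wx, wh, b, w_out):
    """Run a single-layer LSTM over one stride window; regress stride length."""
    hidden = wh.shape[0]
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in seq:
        z = x @ wx + h @ wh + b            # pre-activations for all four gates
        i, f, o, g = np.split(z, 4)        # input, forget, output gates + candidate
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return float(h @ w_out)                # scalar stride length (metres)

# Toy dimensions: 6 raw channels (3-axis accelerometer + gyroscope), 8 hidden units.
feat, hidden, steps = 6, 8, 25
w_enc = rng.normal(0, 0.1, (feat, feat))
w_dec = rng.normal(0, 0.1, (feat, feat))
wx = rng.normal(0, 0.1, (feat, 4 * hidden))
wh = rng.normal(0, 0.1, (hidden, 4 * hidden))
b = np.zeros(4 * hidden)
w_out = rng.normal(0, 0.1, hidden)

noisy_window = rng.normal(0, 1, (steps, feat))  # one stride interval of IMU samples
cleaned = np.array([dae_forward(x, w_enc, w_dec) for x in noisy_window])
stride = lstm_regress(cleaned, wx, wh, b, w_out)
```

In a real training framework the DAE would be trained to reconstruct clean signals from corrupted inputs and the LSTM trained against the foot-mounted ground-truth stride lengths; this sketch only illustrates the data flow and tensor shapes.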


Summary

Introduction

Accurate and pervasive indoor positioning significantly improves our daily life [1]. The demand for accurate and practical location-based services anywhere using portable devices, such as smartphones, is quickly increasing in various applications, including asset and personnel tracking, health monitoring, precision advertising, and location-specific push notifications. The indoor location market is projected to grow from $7.11 billion in 2017 to $40.99 billion by 2022, at a Compound Annual Growth Rate of 42.0% during the forecast period [2]. To meet this explosive demand, various indoor positioning approaches have recently been developed, including RFID [3], Wi-Fi [4,5], UWB [6], BLE [7], magnetic field [1,8,9,10], and visible light [11,12]. The positioning performance of propagation-model-based methods depends on the deployment density of the reference points, and these methods are ineffective when the radio signal is weak or unavailable, as in many scenarios such as underground parking lots. Importantly, fingerprint- and infrastructure-based positioning techniques are not available in emergency scenarios, such as anti-terrorism actions, emergency rescues, and exploration missions.


