Abstract

In this work, a novel hybrid neural network with temporal attention (HNNTA) is proposed for inertial pedestrian localization. The HNNTA model employs a convolutional neural network (CNN) to extract sectional features from the IMU data, followed by a long short-term memory (LSTM) network to capture global temporal information. A temporal attention mechanism is designed to weigh the hidden states produced by the LSTM network and generate the final features for velocity prediction. Specifically, the proposed temporal attention mechanism is composed of a CNN feature refinement module and a sigmoid score normalization function. Different 1D filters refine the temporal hidden states from previously refined time indexes, forming a value matrix in which each row contains different features spanning the entire window of time indexes and each column represents individual features from the same time span. A sigmoid function then normalizes the dot-product alignment between the features from different time spans and those of the last refined time index. We evaluate the HNNTA model on the RoNIN dataset, the largest dataset of natural IMU measurements. Extensive ablation experiments demonstrate the effectiveness of the HNNTA model design. Compared with the state-of-the-art method, the HNNTA model provides 10.39% higher 50th percentile accuracy for phone carriers that have been seen in the training set and 8.69% higher for those that have not. Real-world experiments with IMU measurements collected on the CUHK campus further demonstrate the stronger generalization capability of the HNNTA model.
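To make the attention design concrete, the following is a minimal sketch, not the authors' implementation, of the temporal attention described above: 1D convolutional filters refine the LSTM hidden states into a value matrix, and sigmoid-normalized dot products between each time span's refined features and those of the last refined time index produce the attention weights. All layer sizes, kernel sizes, and names here are illustrative assumptions.

```python
import torch
import torch.nn as nn


class TemporalAttention(nn.Module):
    """Sketch of the temporal attention described in the abstract.

    Assumptions (not from the paper): the hidden size, the kernel size,
    and the use of a single refining convolution (the paper mentions
    "different 1D filters") are simplifications for illustration.
    """

    def __init__(self, hidden_dim: int, kernel_size: int = 3):
        super().__init__()
        # 1D filter that refines the LSTM hidden states over time,
        # producing the value matrix (rows: features along the entire
        # window; columns: features from the same time span).
        self.refine = nn.Conv1d(
            hidden_dim, hidden_dim, kernel_size, padding=kernel_size // 2
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, time, hidden) hidden states from the LSTM.
        v = self.refine(h.transpose(1, 2)).transpose(1, 2)  # value matrix
        query = v[:, -1, :]  # features of the last refined time index
        # Dot-product alignment between every time span and the query,
        # normalized with a sigmoid rather than the usual softmax.
        scores = torch.sigmoid(torch.einsum("bth,bh->bt", v, query))
        # Weighted sum over time -> final feature for velocity prediction.
        return torch.einsum("bt,bth->bh", scores, v)


# Tiny usage example with random data.
if __name__ == "__main__":
    attn = TemporalAttention(hidden_dim=128)
    h = torch.randn(4, 200, 128)  # e.g. a 200-step IMU window
    feat = attn(h)
    print(feat.shape)  # torch.Size([4, 128])
```

The sigmoid normalization is the notable design choice here: unlike softmax, it scores each time span independently rather than forcing the weights to compete and sum to one.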
