Abstract

Waypoint Estimation (WE) has a wide range of applications for indoor walkers, such as fire rescue and navigation toward exit doors, lifts, or stairs as examples of waypoints. Data-driven waypoint estimation has been on the rise with advancements in deep learning algorithms. Current waypoint estimation methods, however, face two challenges. On one hand, most waypoint detection approaches rely on visual sensors, so their estimation performance is limited by lighting conditions when collecting visual data. On the other hand, data-driven methods require a large amount of labeled data to train a WE model, which significantly increases the time spent manually marking labels. Targeting these two challenges, our work proposes a novel Deep Bayesian Active Learning Waypoint Estimator for indoor walkers (DeepWE) based on Human Activity Recognition (HAR), which estimates six indoor waypoints from walkers' daily activities by exploiting the strong correlation between human activities and waypoints. First, an initial DeepWE model is developed using a Bayesian ensembled Convolutional Neural Network (B-CNN) on accelerometer and gyroscope data. Then, active learning is employed to query the most informative samples from the pool with four acquisition functions, and only these queried samples are labeled manually. Finally, the initial DeepWE model is updated on the labeled data using an incremental learning algorithm. Empirical results on two publicly available datasets, USC-HAD and OPPORTUNITY, show that DeepWE delivers a considerable accuracy boost for waypoint estimation while substantially reducing the number of acquired pool points (by more than 40%).
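The active-learning step above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses predictive entropy, one standard acquisition function (the paper employs four), over the softmax outputs of a Bayesian ensemble, and all names and shapes are illustrative assumptions.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the ensemble-averaged class probabilities.

    probs: array of shape (n_models, n_samples, n_classes) holding the
    softmax outputs of each member of a Bayesian ensemble (hypothetical
    stand-in for the B-CNN described in the abstract).
    """
    mean_p = probs.mean(axis=0)                         # (n_samples, n_classes)
    return -np.sum(mean_p * np.log(mean_p + 1e-12), axis=1)

def acquire(probs, k):
    """Return indices of the k most informative pool samples,
    i.e. those with the highest predictive entropy."""
    scores = predictive_entropy(probs)
    return np.argsort(scores)[::-1][:k]

# Toy pool: 3 ensemble members, 4 pool samples, 6 waypoint classes
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(6), size=(3, 4))
chosen = acquire(probs, 2)
print("query these pool indices for manual labeling:", chosen)
```

Only the samples returned by `acquire` would be sent for manual labeling and then used to update the model incrementally, which is how the acquired-pool-point reduction reported in the abstract arises.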
