Abstract

With the rapid development of the Internet of Things, human activity recognition (HAR) using wearable inertial measurement units (IMUs) has become a promising technology for many research areas. Recently, deep-learning-based methods have paved a new way to understand and analyze the complex data in HAR systems. However, the performance of these methods depends largely on the quality and quantity of the collected data. In this article, we propose to build a large data set based on virtual IMUs and to address the resulting technical issues with a multiple-domain deep learning framework consisting of three parts. In the first part, we learn single-frame human activity from noisy IMU data with hybrid convolutional neural networks in a semisupervised manner. In the second part, the extracted features are fused according to the principle of uncertainty-aware consistency, which reduces uncertainty by weighting the importance of the features. In the last part, transfer learning is performed based on the newly released Archive of Motion Capture as Surface Shapes (AMASS) data set, which contains abundant synthetic human poses; this enhances the variety and diversity of the training data and benefits both training and feature transfer in the proposed method. The efficiency and effectiveness of the proposed method are demonstrated on the real Deep Inertial Poser (DIP) data set. The experimental results show that the proposed method converges within a few iterations and outperforms all competing methods.
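The abstract does not specify the exact fusion rule, but a common realization of uncertainty-aware feature weighting is to weight each branch's features inversely to its predicted uncertainty. The sketch below illustrates this idea only; the function name, the scalar per-branch uncertainties, and the inverse-variance weighting scheme are all assumptions, not the paper's actual method.

```python
import numpy as np

def fuse_features(features, uncertainties, eps=1e-8):
    """Illustrative uncertainty-weighted fusion (hypothetical sketch).

    features      -- array of shape (num_branches, feature_dim)
    uncertainties -- one nonnegative scalar per branch; lower means
                     the branch is trusted more
    """
    features = np.asarray(features, dtype=float)
    unc = np.asarray(uncertainties, dtype=float)
    weights = 1.0 / (unc + eps)          # low uncertainty -> high weight
    weights = weights / weights.sum()    # normalize weights to sum to 1
    # Weighted average of the branch features
    return (weights[:, None] * features).sum(axis=0)
```

For example, two branches with equal uncertainty contribute equally, while a branch with half the uncertainty of another receives twice the weight before normalization.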
