Abstract

Human Activity Recognition (HAR) through wearable sensors greatly improves quality of life through its many applications in health monitoring, assisted living, and fitness tracking. For HAR, multi-sensor channel information is vital to performance. Recent work shows that applying an attention neural network to prioritize discriminative sensor channels helps the model classify activities more precisely. However, extracting discriminative information from multi-sensor channels is not always trivial, for example when collecting data from elderly hospitalized patients. In this context, existing HAR methods struggle to classify activities, particularly activities of a similar nature. Moreover, deep HAR models predominantly suffer from overfitting on small datasets, which leads to poor performance. Data augmentation is a viable solution to this problem, but currently available data augmentation methods for HAR have various drawbacks, including domain dependence and distortion of the model's behavior on test sequences. To address the aforementioned HAR problems, we propose a novel framework that focuses on two aspects. First, we enhance the latent information in each sensor channel and learn to exploit the relations among multiple latent features and the ongoing activity, thereby enriching the discriminative feature representation of each activity. Second, we introduce a new augmentation strategy that addresses the shortcomings of existing multi-sensor channel data augmentation and improves the generalization of our HAR model. Our model outperforms existing state-of-the-art approaches on the four most commonly used HAR datasets from diverse domains. We extensively demonstrate the effectiveness of the proposed framework through detailed quantitative analysis of experimental results and ablation studies.
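The channel-attention idea summarized above, re-weighting sensor channels by learned importance before classification, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's architecture: the function `channel_attention` and the fixed score vector `w` are hypothetical stand-ins for what would be learned end to end.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(z - z.max())
    return e / e.sum()

def channel_attention(x, w):
    """Re-weight sensor channels by attention scores.

    x: array of shape (channels, time), one window of multi-sensor readings
    w: array of shape (channels,), per-channel scores (learned in practice;
       fixed here purely for illustration)
    Returns the re-weighted window and the attention weights.
    """
    alpha = softmax(w)                  # attention weights, sum to 1
    return alpha[:, None] * x, alpha    # scale each channel's time series

# Toy window: 3 sensor channels (e.g., accelerometer axes), 5 time steps.
x = np.arange(15, dtype=float).reshape(3, 5)
w = np.array([0.1, 2.0, 0.5])           # hypothetical learned scores
x_att, alpha = channel_attention(x, w)
```

In a full model these weights would be produced by a small network conditioned on the input, so that the most discriminative channels for the ongoing activity dominate the downstream feature representation.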
