Abstract

Wearable Human Activity Recognition (WHAR) is an important research field in ubiquitous and mobile computing. Deep WHAR models suffer from overfitting caused by the lack of a large amount and variety of labeled data, which is usually addressed by generating data to enlarge the training set, i.e., Data Augmentation (DA). Generative Adversarial Networks (GANs) have shown excellent data generation ability, and GAN-based DA can improve the generalization ability of a classification model. However, existing GANs cannot make full use of the important modality information and fail to balance modality details and global consistency, and thus cannot meet the requirements of deep multi-modal WHAR. In this paper, a hierarchical multi-modal GAN model (HMGAN) is proposed for WHAR. HMGAN consists of multiple modal generators, one hierarchical discriminator, and one auxiliary classifier. The modal generators learn the complex multi-modal distributions of sensor data. The hierarchical discriminator provides outputs for both low-level modal discrimination losses and a high-level overall discrimination loss, striking a balance between modality details and global consistency. Experiments on five public WHAR datasets demonstrate that HMGAN achieves state-of-the-art performance for WHAR, outperforming the best baseline by an average of 3.4%, 3.8%, and 3.5% in accuracy, macro F1 score, and weighted F1 score, respectively.
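The hierarchical discrimination idea, combining per-modality losses with an overall loss over the fused sample, can be sketched roughly as follows. This is a minimal illustrative NumPy sketch, not the paper's implementation: the function names, the toy discriminator, and the weighting parameter `alpha` are all assumptions introduced here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_discriminator(x):
    # Stand-in for a learned discriminator: a fixed linear score
    # squashed to (0, 1) with a sigmoid.
    w = np.ones(x.shape[-1]) / x.shape[-1]
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def hierarchical_d_loss(real_mods, fake_mods, alpha=0.5):
    """Combine low-level modal losses with a high-level overall loss.

    real_mods / fake_mods: lists of per-modality arrays, shape (batch, feat).
    alpha (assumed): weight trading off modality details vs. global consistency.
    """
    eps = 1e-8
    # Low-level: one standard GAN discrimination loss per modality.
    modal_losses = [
        -np.mean(np.log(toy_discriminator(r) + eps))
        - np.mean(np.log(1.0 - toy_discriminator(f) + eps))
        for r, f in zip(real_mods, fake_mods)
    ]
    # High-level: discriminate the fused multi-modal sample as a whole.
    real_all = np.concatenate(real_mods, axis=-1)
    fake_all = np.concatenate(fake_mods, axis=-1)
    overall_loss = (
        -np.mean(np.log(toy_discriminator(real_all) + eps))
        - np.mean(np.log(1.0 - toy_discriminator(fake_all) + eps))
    )
    return alpha * float(np.mean(modal_losses)) + (1.0 - alpha) * overall_loss

# Two toy modalities (e.g. accelerometer and gyroscope windows).
real = [rng.normal(1.0, 0.1, (8, 3)) for _ in range(2)]
fake = [rng.normal(-1.0, 0.1, (8, 3)) for _ in range(2)]
loss = hierarchical_d_loss(real, fake)
```

Setting `alpha` closer to 1 emphasizes per-modality realism, while values closer to 0 emphasize consistency of the fused multi-modal sample.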
