Abstract

Human activity recognition (HAR) is a key component of mobile and ubiquitous computing. With the growing number of sensors embedded in devices such as smartphones and smartwatches, models can recognize increasingly complex human behaviors. However, this also increases data heterogeneity, caused by diverse user backgrounds, health conditions, and sensing environments, which makes it difficult to align multimodal data distributions across individuals. This paper proposes a novel unsupervised domain adaptation framework, Adversarial Time-Frequency Attention (ATFA), to efficiently adapt models to new users. In particular, the proposed attention-based modality fusion module captures and fuses important modalities according to their context, reducing redundant information. Additionally, the network exploits frequency-domain features to improve recognition of human activities. Extensive experiments on three publicly available HAR datasets demonstrate the superiority of the proposed method over state-of-the-art baselines.
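To make the two core ideas concrete, the sketch below illustrates (a) extracting frequency-domain features from a raw sensor window via an FFT magnitude spectrum, and (b) attention-style fusion that softmax-weights per-modality feature vectors. This is a minimal illustrative sketch, not the paper's actual ATFA architecture: the energy-based scoring function stands in for a learned attention network, and all function names and parameters are hypothetical.

```python
import numpy as np

def frequency_features(window, n_bins=8):
    """Pool the magnitude spectrum of a 1-D sensor window into n_bins bands."""
    spec = np.abs(np.fft.rfft(window))
    # Equal-width bands give a fixed-size frequency-domain feature vector.
    bands = np.array_split(spec, n_bins)
    return np.array([b.mean() for b in bands])

def attention_fuse(modality_feats):
    """Softmax-weighted fusion of per-modality feature vectors.

    Scores each modality by total feature energy (a stand-in for the
    learned scoring network an attention module would use) and returns
    the attention-weighted sum across modalities.
    """
    feats = np.stack(modality_feats)       # shape (num_modalities, feat_dim)
    scores = feats.sum(axis=1)             # one scalar score per modality
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax over modalities
    return weights @ feats                 # fused vector, shape (feat_dim,)

# Two synthetic sensor modalities (e.g. accelerometer and gyroscope).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 128, endpoint=False)
accel = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(128)
gyro = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.standard_normal(128)

fused = attention_fuse([frequency_features(accel), frequency_features(gyro)])
print(fused.shape)  # (8,)
```

In the paper's framework, the attention weights would be learned jointly with an adversarial domain-adaptation objective so that the fused representation transfers to new users; the sketch only shows the data flow of frequency extraction followed by weighted fusion.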
