Abstract

While deep learning models have advanced sensor-based human activity recognition (HAR), they usually require large amounts of annotated sensor data to extract robust features. To alleviate the burden of data annotation, contrastive learning has been applied to sensor-based HAR. Data augmentation is an essential component of contrastive learning and significantly affects pretraining performance. However, current popular augmentation methods do not achieve competitive performance in contrastive learning for sensor-based HAR. Motivated by this issue, we propose a new sensor data augmentation method based on resampling, which introduces variable-domain information and simulates realistic activity data by varying the sampling frequency to maximize coverage of the sampling space. The resampling augmentation method was evaluated in supervised learning and in contrastive learning [SimCLR for HAR (SimCLRHAR) and MoCo for HAR (MoCoHAR)]. In the experiments, we use four datasets, UCI-HAR, MotionSense, USC-HAD, and MobiAct, with the mean F1-score as the evaluation metric for downstream tasks. The experimental results show that resampling augmentation outperforms all state-of-the-art augmentation methods in supervised learning and in contrastive learning with a small amount of labeled data. The results also demonstrate that not all data augmentation methods have positive effects in contrastive learning frameworks.
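The idea of resampling augmentation, varying the effective sampling frequency of a sensor window, can be sketched as follows. This is a minimal illustrative sketch using NumPy linear interpolation; the function name, the ratio range, and the choice to interpolate back to the original window length are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def resampling_augment(x, min_ratio=0.5, max_ratio=1.5, rng=None):
    """Simulate a different sampling frequency for a sensor window.

    x: array of shape (T, C), T time steps, C sensor channels.
    A random ratio rescales the virtual sampling rate; the signal is
    linearly interpolated to the new length and then back to T so the
    augmented window still fits the model's fixed input size.
    (Hypothetical sketch; parameter ranges are assumptions.)
    """
    rng = np.random.default_rng() if rng is None else rng
    t, c = x.shape
    ratio = rng.uniform(min_ratio, max_ratio)
    new_t = max(2, int(round(t * ratio)))
    src = np.linspace(0.0, 1.0, t)       # original sample positions
    mid = np.linspace(0.0, 1.0, new_t)   # positions at the new frequency
    # resample each channel to the new frequency ...
    resampled = np.stack([np.interp(mid, src, x[:, i]) for i in range(c)], axis=1)
    # ... then interpolate back to the original window length
    return np.stack([np.interp(src, mid, resampled[:, i]) for i in range(c)], axis=1)

window = np.random.randn(128, 3)  # e.g. a 128-sample 3-axis accelerometer window
aug = resampling_augment(window)
print(aug.shape)  # (128, 3)
```

Because the output keeps the original shape, such a transform can slot into a SimCLR- or MoCo-style pipeline as one of the two stochastic views of each window.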
