Abstract

Transfer learning is a common solution to cross-domain identification problems in Human Activity Recognition (HAR). Most existing approaches perform cross-subject transfer while ignoring transfers between different sensors or body parts, which limits the application scope of these models. Only a few attempts have been made to design a versatile HAR approach (cross-subject, cross-sensor, and cross-body-part). Unfortunately, these existing approaches depend on complex handcrafted features and ignore the unequal contribution of samples to positive transfer, which hinders transfer performance. In this paper, we propose a framework for versatile cross-domain activity recognition. Specifically, the proposed framework allows end-to-end implementation by exploiting adaptive features from activity images instead of extracting handcrafted features, and it uses a two-stage adaptation strategy consisting of a pretraining stage and a re-weighting stage to perform knowledge transfer. The pretraining stage ensures the transferability of the source domain as well as the separability of the target domain, and the re-weighting stage rebalances the contributions of samples from the two domains. Together, these two stages enhance knowledge transfer. We evaluate the performance of the proposed framework through comprehensive experiments on three public HAR datasets (DSADS, OPPORTUNITY, and PAMAP2), and the experimental results demonstrate the effectiveness of our framework in versatile cross-domain HAR.
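To make the two-stage strategy described above concrete, the following is a minimal sketch (not the authors' code) of such a pipeline in PyTorch. It assumes a small CNN over single-channel activity images, a labelled source loader, a target loader yielding unlabelled windows, and a hypothetical `weight_fn` that produces per-sample weights in the re-weighting stage; all names (`HARNet`, `pretrain`, `reweight_and_adapt`, `weight_fn`) are illustrative placeholders, not the paper's actual components.

```python
# Hypothetical sketch of a two-stage adaptation loop for cross-domain HAR.
# Stage 1: supervised pretraining on the source domain.
# Stage 2: re-weighted training that rebalances per-sample contributions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HARNet(nn.Module):
    """Small CNN over single-channel activity images (illustrative architecture)."""

    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        z = self.features(x).flatten(1)      # adaptive features from activity images
        return self.classifier(z), z


def pretrain(model, source_loader, epochs=10, lr=1e-3):
    """Stage 1: standard supervised pretraining on labelled source data."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x_s, y_s in source_loader:
            logits, _ = model(x_s)
            loss = F.cross_entropy(logits, y_s)
            opt.zero_grad()
            loss.backward()
            opt.step()


def reweight_and_adapt(model, source_loader, target_loader, weight_fn,
                       epochs=10, lr=1e-4):
    """Stage 2: weight_fn is a placeholder returning one weight per source sample
    (e.g. based on feature-space similarity to the target domain)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        # target_loader is assumed to yield (data, placeholder_label) pairs
        for (x_s, y_s), (x_t, _) in zip(source_loader, target_loader):
            logits_s, z_s = model(x_s)
            _, z_t = model(x_t)
            w = weight_fn(z_s, z_t)                                   # shape (B,)
            per_sample = F.cross_entropy(logits_s, y_s, reduction="none")
            loss = (w * per_sample).mean()                            # re-weighted loss
            opt.zero_grad()
            loss.backward()
            opt.step()
```

In this sketch, the re-weighting stage simply scales the per-sample classification loss; the actual weighting criterion used in the paper is not specified here and would replace the `weight_fn` placeholder.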
