Abstract

The integration of cyber-physical systems and artificial intelligence in human activity recognition (HAR) applications enables intelligent interaction within physical environments. In real-world HAR applications, the domain shift between training (source) and testing (target) data captured in different scenarios leads to low classification accuracy. Existing unsupervised domain adaptation (UDA) methods often still require some labeled target samples for model adaptation, which limits their practicality. This study proposes a novel unsupervised deep domain adaptation algorithm (UDDAA) for HAR using recurrent neural networks. UDDAA introduces a maximum mean class discrepancy (MMCD) metric that accounts for both inter-class and intra-class differences within each domain. MMCD extends the maximum mean discrepancy (MMD) to measure class-level distribution discrepancy between the source and target domains, and aligning these distributions improves domain adaptation performance. Without relying on labeled target data, UDDAA predicts pseudo-labels for the unlabeled target dataset and combines them with the labeled source data to train the model to learn domain-invariant representations. This makes UDDAA practical for scenarios where labeled target data are difficult or expensive to obtain, enabling human-computer interaction (HCI) systems to function effectively across varied environments and user behaviors. Extensive experiments on benchmark datasets demonstrate UDDAA's superior classification accuracy over existing baselines. Notably, UDDAA achieved 92% and 99% accuracy on the University of Central Florida (UCF) to Human Motion Database (HMDB) and HMDB to UCF transfers, respectively. On personally recorded videos with complex backgrounds, it achieved classification accuracies of 95% for basketball and 90% for football activities, underscoring its generalization ability, robustness, and effectiveness.
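
The abstract does not give the MMCD formula itself. As a rough illustration of the class-level discrepancy it describes, a class-conditional extension of MMD could take the following form; the symbols, the uniform class weighting, and the use of target pseudo-labels to form the per-class target sets are assumptions made here for exposition, not the authors' exact definition:

\mathrm{MMD}^2(S, T) = \left\| \frac{1}{n_s} \sum_{i=1}^{n_s} \phi(x_i^s) - \frac{1}{n_t} \sum_{j=1}^{n_t} \phi(x_j^t) \right\|_{\mathcal{H}}^2 ,

\mathrm{MMCD}(S, T) \approx \frac{1}{C} \sum_{c=1}^{C} \mathrm{MMD}^2\!\left(S_c, \hat{T}_c\right),

where S_c denotes the labeled source samples of class c, \hat{T}_c the target samples assigned pseudo-label c, \phi a kernel feature map into the reproducing kernel Hilbert space \mathcal{H}, and C the number of classes. The per-class terms capture intra-class alignment; an additional term rewarding separation between the means of different classes would account for the inter-class component mentioned above.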
