Abstract

Human activity recognition without equipment plays a vital role in smart home applications, freeing humans from the shackles of wearable devices. In this paper, using the channel state information (CSI) of the WiFi signal, semi-supervised transfer learning with dynamic associate domain adaptation is proposed for human activity recognition. To improve CSI quality and denoise the CSI, we perform missing-packet filling, burst-noise removal, background estimation, feature extraction, feature enhancement, and data augmentation in the data pre-processing stage. This paper considers the problem of environment-independent human activity recognition, also known as domain adaptation. A pre-trained model is trained on the source domain by collecting a completely labeled CSI dataset of all human activity patterns. The pre-trained model is then transferred to the target environment in the semi-supervised transfer learning stage, so when humans move to a different target domain, only a partially labeled dataset of the target domain is required for fine-tuning. In this paper, we propose dynamic associate domain adaptation, called DADA. By modifying the existing associate domain adaptation algorithm, the target domain can provide a dynamic ratio of labeled to unlabeled data, whereas the existing algorithm only allows target domains with unlabeled data. The advantage of DADA is that it provides a dynamic strategy to counteract the differing effects of different environments. In addition, we design an attention-based DenseNet model, or AD, as our training network; it modifies an existing DenseNet by adding an attention function. The proposed solution is abbreviated as DADA-AD throughout the paper. The experimental results show that, for domain adaptation across different domains, the human activity recognition accuracy of the DADA-AD scheme is 97.4%. They also show that DADA-AD outperforms existing semi-supervised learning schemes.
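
To make the core idea concrete, the sketch below shows one way the DADA fine-tuning loss could combine a supervised term on a dynamically sized labeled slice of the target data with the walker and visit losses of associate domain adaptation (Haeusser et al., 2017). This is a minimal illustration under our own assumptions: the function names, the mask-based labeled/unlabeled split, and the weights `beta1`/`beta2` are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def associative_losses(src_emb, src_labels, tgt_emb):
    """Walker and visit losses of associate domain adaptation,
    computed between source and unlabeled-target embeddings."""
    sim = src_emb @ tgt_emb.t()                    # (n_src, n_tgt) similarities
    p_ab = F.softmax(sim, dim=1)                   # source -> target transitions
    p_ba = F.softmax(sim.t(), dim=1)               # target -> source transitions
    p_aba = p_ab @ p_ba                            # round-trip probabilities

    # Walker loss: round trips should land on a source sample of the same class.
    same_class = (src_labels[:, None] == src_labels[None, :]).float()
    target_dist = same_class / same_class.sum(dim=1, keepdim=True)
    walker = -(target_dist * torch.log(p_aba + 1e-8)).sum(dim=1).mean()

    # Visit loss: every target sample should be visited about equally often.
    visit_prob = p_ab.mean(dim=0)
    uniform = torch.full_like(visit_prob, 1.0 / visit_prob.numel())
    visit = -(uniform * torch.log(visit_prob + 1e-8)).sum()
    return walker, visit

def dada_finetune_loss(encoder, classifier, src_x, src_y,
                       tgt_x, tgt_y, tgt_labeled_mask,
                       beta1=1.0, beta2=0.5):
    """One fine-tuning loss; `tgt_labeled_mask` marks the labeled target
    samples, so its mean is the dynamic labeled/unlabeled ratio."""
    src_emb, tgt_emb = encoder(src_x), encoder(tgt_x)
    loss = F.cross_entropy(classifier(src_emb), src_y)
    if tgt_labeled_mask.any():                     # supervised term on labeled slice
        loss = loss + F.cross_entropy(classifier(tgt_emb[tgt_labeled_mask]),
                                      tgt_y[tgt_labeled_mask])
    unlabeled = tgt_emb[~tgt_labeled_mask]
    if unlabeled.size(0) > 0:                      # associative terms on the rest
        walker, visit = associative_losses(src_emb, src_y, unlabeled)
        loss = loss + beta1 * walker + beta2 * visit
    return loss
```

Raising the labeled share of the target batch strengthens the supervised term, while a smaller share shifts the weight onto the associative terms, which is the dynamic trade-off the abstract describes.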

Highlights

  • This paper addresses the problem of environment-independent human activity recognition, also known as domain adaptation

  • We propose semi-supervised transfer learning with dynamic associate domain adaptation for human activity recognition

  • The pre-trained model is trained on the source domain by collecting a completely labeled channel state information (CSI) dataset of all human activity patterns

Summary

Introduction

To significantly improve the recognition accuracy, we slightly increase the retraining complexity by performing a fine-tuning operation that adds a dynamic number of labeled CSI samples in the new environment. The experimental results show that, for domain adaptation across different domains, the human activity recognition accuracy of the DADA-AD scheme is 97.4%. To increase the recognition accuracy, an attention-based DenseNet model (AD) is designed as our new training network. It modifies the existing DenseNet model with the ECA-Net (efficient channel attention network) model. To avoid the loss of hidden information, we incorporate the ECA structure into the dense block of DenseNet to strengthen the important channels of previous layers. Our experimental results show that the accuracy of AD as our training network increases by 4.13% compared to the existing HAR-MN-EF scheme [19].
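
The intuition behind AD can be sketched in code. Below is a minimal PyTorch illustration of an ECA module wired into a DenseNet dense layer, assuming 2-D CSI feature maps; the class names, the 1-D convolution kernel size, and the growth rate are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention: per-channel gates from a 1-D convolution
    over the globally pooled channel descriptor (as in ECA-Net)."""
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                              # x: (N, C, H, W)
        y = x.mean(dim=(2, 3))                         # global average pool -> (N, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)       # local cross-channel interaction
        return x * torch.sigmoid(y)[:, :, None, None]  # reweight the channels

class ECADenseLayer(nn.Module):
    """Dense layer that first reweights the concatenated feature maps of all
    previous layers with ECA, then applies the usual BN-ReLU-Conv."""
    def __init__(self, in_channels, growth_rate=12):
        super().__init__()
        self.eca = ECA()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3,
                              padding=1, bias=False)

    def forward(self, x):
        out = self.conv(torch.relu(self.bn(self.eca(x))))
        return torch.cat([x, out], dim=1)              # dense connectivity
```

Because the gate is applied before each dense layer's convolution, the important channels contributed by earlier layers are amplified rather than diluted as the concatenated input grows, which matches the stated goal of avoiding the loss of hidden information.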

Related Works
Motivation
System Model
Problem Formulation
Basic Idea
Data Collection and Processing Phase
Pre-Training Phase
Dynamic Associate Domain Adaptation Phase
Associate Knowledge Fine-Tuning Phase
Experimental Setup
Performance Analysis
Findings
Conclusions
