Abstract

Human activity recognition (HAR) based on Wi-Fi signals has attracted significant attention due to its convenience and the availability of existing Wi-Fi infrastructure and sensors. Channel State Information (CSI), which characterizes how Wi-Fi signals propagate through the environment, serves as the sensing input. However, many scenarios and applications have insufficient training data due to constraints such as cost, time, or resources, which makes it challenging to reach high accuracy with machine learning techniques. In this study, multiple deep learning models for HAR were employed to achieve acceptable accuracy with much less training data than other methods. An encoder pretrained as part of a Multi-Input Multi-Output Autoencoder (MIMO AE) on Mel-Frequency Cepstral Coefficients (MFCC) extracted from a small subset of data samples was used for feature extraction. Fine-tuning was then applied by adding the encoder as a fixed layer in the classifier, which was trained on a small fraction of the remaining data. The evaluation results (K-fold cross-validation with K = 5) showed that, using only 30% of the training and validation data (equivalent to 24% of the total data), accuracy improved by 17.7% compared to the case where the encoder was not used (79.3% accuracy for the designed classifier versus 90.3% for the classifier with the fixed encoder). Although higher accuracy can be achieved by using the pretrained encoder as a trainable layer (up to a 2.4% improvement) at additional computational cost, this small gap demonstrates the effectiveness and efficiency of the proposed method for HAR using Wi-Fi signals.
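The core idea described above is reusing the autoencoder's encoder as a frozen feature extractor in front of a small classification head. The sketch below illustrates this transfer-learning setup in PyTorch; the layer sizes, input dimensions (MFCC coefficients per CSI stream), number of activity classes, and weight-file name are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch (assumed architecture, not the authors' code): a pretrained
# encoder kept as a fixed layer inside an activity classifier.
import torch
import torch.nn as nn

N_MFCC = 20          # assumed MFCC coefficients per frame
N_STREAMS = 3        # assumed number of CSI antenna streams (MIMO inputs)
LATENT_DIM = 64      # assumed bottleneck size of the autoencoder
N_CLASSES = 6        # assumed number of activities

class Encoder(nn.Module):
    """Encoder half of the MIMO autoencoder (flattened MFCC inputs)."""
    def __init__(self, in_dim=N_STREAMS * N_MFCC, latent_dim=LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class FrozenEncoderClassifier(nn.Module):
    """Classifier that keeps the pretrained encoder as a fixed (frozen) layer."""
    def __init__(self, encoder, n_classes=N_CLASSES):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():   # freeze encoder: only the head is trained
            p.requires_grad = False
        self.head = nn.Sequential(
            nn.Linear(LATENT_DIM, 32), nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        with torch.no_grad():                 # encoder acts as a fixed feature extractor
            z = self.encoder(x)
        return self.head(z)

# Usage: load encoder weights obtained from autoencoder pretraining, then
# train only the classification head on the small labelled subset.
encoder = Encoder()
# encoder.load_state_dict(torch.load("pretrained_encoder.pt"))  # hypothetical file
model = FrozenEncoderClassifier(encoder)
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, N_STREAMS * N_MFCC)        # dummy batch of MFCC feature vectors
y = torch.randint(0, N_CLASSES, (8,))         # dummy activity labels
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()                               # gradients flow only through the head
optimizer.step()
```

Making the encoder trainable instead (by leaving `requires_grad = True` and removing the `no_grad` context) corresponds to the higher-cost variant mentioned in the abstract, which yields only a small additional accuracy gain.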
