Abstract

This study focuses on improving the performance of human activity recognition when only a small amount of sensor data is available, as in resource-limited environments and special practical scenarios such as high-risk projects, anomaly monitoring, and tactical operations. Human Activity Recognition (HAR) based on wearable sensors has been an attractive research topic in machine learning and ubiquitous computing over the last few decades, and has great practical value in health surveillance, medical assistance, personalized services, and related applications. However, limitations in sensor sampling rate, power sustainability, deployment, and other restricted conditions make it difficult to collect sufficient and useful sensor data anywhere and at any time. HAR based on wearable sensors therefore often faces a low-data regime in practical scenarios, which lowers the accuracy of activity recognition and urgently needs to be addressed. Generative Adversarial Networks (GANs) provide a powerful method for training effective generative models that can produce highly convincing, realistic images, and the GAN framework and its variants offer promising directions for improving the performance of HAR. In this paper, we propose a new generative adversarial network framework, called SensoryGANs, that can effectively generate sensor data usable for HAR. To the best of our knowledge, SensoryGANs is the first complete generative adversarial network framework applied to generating sensor data in the HAR research field. First, we explore and design three activity-specific GAN models for three human daily activities. Second, these models are trained following the standard (vanilla) GAN training procedure. Third, the trained generators obtained from the adversarial optimization process are used to generate synthetic sensor data. Finally, the synthetic sensor data from SensoryGANs are used to enrich the original real sensor datasets, which improves the performance of the target activity recognition model. In addition, we propose three visual evaluation methods for assessing the synthetic sensor data produced by the trained generators in the SensoryGANs models. Experimental results show that the SensoryGANs models can capture the implicit distribution of real human activity sensor data, and that the synthetic sensor data generated by the SensoryGANs models have the potential to improve human activity recognition.
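The abstract describes a pipeline in which a per-activity GAN is trained on real sensor windows and its generator is then used to augment the training set. The sketch below illustrates that general idea only; the network architectures, window length, latent size, and training settings are illustrative assumptions and are not the paper's actual SensoryGANs models.

```python
# Minimal sketch of a per-activity vanilla GAN over accelerometer windows.
# All sizes and layer choices are assumptions for illustration.
import torch
import torch.nn as nn

WINDOW = 128    # assumed samples per sensor window
CHANNELS = 3    # tri-axial accelerometer (x, y, z)
LATENT = 64     # assumed latent noise dimension


class Generator(nn.Module):
    """Maps a noise vector to a synthetic (CHANNELS x WINDOW) sensor window."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 256), nn.ReLU(),
            nn.Linear(256, CHANNELS * WINDOW), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, CHANNELS, WINDOW)


class Discriminator(nn.Module):
    """Scores whether a sensor window is real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(CHANNELS * WINDOW, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, x):
        return self.net(x)


def train_step(G, D, real, opt_g, opt_d):
    """One vanilla-GAN update on a batch of real windows for a single activity."""
    bce = nn.BCEWithLogitsLoss()
    b = real.size(0)
    fake = G(torch.randn(b, LATENT))

    # Discriminator step: push real windows toward 1 and generated ones toward 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make generated windows be scored as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(b, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# After training, windows sampled from G (labeled with the activity the model
# was trained on) can be concatenated with the real training set to enlarge
# the data available to the downstream HAR classifier.
```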
