Abstract

Wearable sensors have become increasingly popular in recent years, as technological advances have made devices cheaper, smaller, and more widely available. As a result, there has been growing interest in applying machine learning techniques to Human Activity Recognition (HAR) in healthcare, where accurately detecting and analyzing activities and behaviors can improve patient care and treatment. However, current approaches often require large amounts of labeled data, which can be difficult and time-consuming to obtain. In this study, we propose a new approach that uses synthetic sensor data generated by 3D engines and Generative Adversarial Networks (GANs) to overcome this obstacle. We evaluate the synthetic data with several methods and compare it to real-world data, including classification results with baseline models. Our results show that synthetic data can improve the performance of deep neural networks: on a well-known dataset, the F1-score for less complex activities improves over state-of-the-art results by 8.4% to 73%. However, as we show on a self-recorded nursing activity dataset of longer duration, this effect diminishes for more complex activities. This research highlights the potential of synthetic sensor data generated from multiple sources to overcome data scarcity in HAR.
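The evaluation protocol the abstract describes, training the same baseline on real data alone and on real data augmented with synthetic windows, then comparing F1-scores, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the arrays are random placeholders, `flat_windows`, the five-class setup, the window length of 128, and the random-forest baseline are all assumptions for the sake of a runnable example (the paper uses 3D-engine/GAN-generated sensor data and deep neural networks).

```python
# Minimal sketch: compare a baseline trained on real data only vs. on
# real data augmented with synthetic windows, using macro F1 on a
# held-out real test set. All data below are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

def flat_windows(n, length=128, channels=3):
    # Stand-in for windowed IMU data (n windows of accelerometer x/y/z),
    # flattened so the sklearn baseline can consume them directly.
    return rng.normal(size=(n, length * channels))

X_real, y_real = flat_windows(500),  rng.integers(0, 5, 500)   # labeled real data
X_syn,  y_syn  = flat_windows(2000), rng.integers(0, 5, 2000)  # synthetic data
X_test, y_test = flat_windows(300),  rng.integers(0, 5, 300)   # held-out real data

for name, (X, y) in {
    "real only":        (X_real, y_real),
    "real + synthetic": (np.vstack([X_real, X_syn]),
                         np.concatenate([y_real, y_syn])),
}.items():
    clf = RandomForestClassifier(random_state=0).fit(X, y)
    print(name, f1_score(y_test, clf.predict(X_test), average="macro"))
```

With real data, the gap between the two printed scores is one way to quantify how much the synthetic source helps, mirroring the F1-score comparisons reported in the abstract.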
