Abstract

Automatic recognition of human activities using wearable sensors remains a challenging problem due to the high inter-person variability in gait and movement. Moreover, finding the best on-body location for a wearable sensor is also critical, since it provides valuable context information that can be used for accurate recognition. This article addresses the problem of classifying motion signals generated by multiple wearable sensors for the recognition of human activity and the localisation of the wearable sensors. Unlike existing methods that use the raw accelerometer and gyroscope signals to extract time- and frequency-domain features, we propose to create frequency images from the raw signals and show this representation to be more robust. The frequency image sequences are generated from the accelerometer and gyroscope signals of seven different body parts. These frequency images serve as the input to our proposed two-stream Convolutional Neural Network (CNN), which predicts the human activity and the location of the sensor generating the activity signal. We show that the complementary information collected by the accelerometer and gyroscope sensors can be leveraged to develop an effective classifier that accurately predicts the performed activity. We evaluate the proposed method using a cross-subject protocol and show that it achieves an F1-score of 0.90 on a publicly available real-world human activity dataset, surpassing the performance reported by a state-of-the-art method on the same dataset. Moreover, we also experimented with data from different body locations to determine the best sensor position for the underlying task. We show that the shin and waist are the best on-body locations for sensor placement, a finding that could help other researchers collect higher-quality activity data. We plan to publicly release the generated frequency images from all sensor positions and activities, together with our implementation code, upon publication.
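The frequency-image representation described above can be illustrated with a short sketch. The following Python snippet shows one plausible way to convert a windowed tri-axial accelerometer or gyroscope signal into a log-spectrogram image via a short-time Fourier transform; the 50 Hz sampling rate, window length, and overlap are illustrative assumptions, as the abstract does not specify them.

```python
# A minimal sketch of one plausible way to turn a raw tri-axial motion signal
# into a "frequency image", using a short-time Fourier transform (spectrogram).
# The 50 Hz sampling rate, window length, and overlap are illustrative
# assumptions, not values taken from the paper.
import numpy as np
from scipy.signal import spectrogram

FS = 50  # assumed sampling rate of the wearable sensor, in Hz

def to_frequency_image(signal_3axis: np.ndarray) -> np.ndarray:
    """Stack per-axis log-spectrograms of an (n_samples, 3) accelerometer
    or gyroscope window into a (3, n_freqs, n_frames) image."""
    channels = []
    for axis in range(signal_3axis.shape[1]):
        f, t, sxx = spectrogram(signal_3axis[:, axis], fs=FS,
                                nperseg=64, noverlap=32)
        channels.append(np.log(sxx + 1e-10))  # log scale stabilises dynamics
    return np.stack(channels, axis=0)

# Example: a 5-second window of synthetic accelerometer data.
window = np.random.randn(5 * FS, 3)
image = to_frequency_image(window)
print(image.shape)  # (3, 33, 6) with these STFT parameters
```

Each axis contributes one channel, so the resulting image can be fed to a standard 2D CNN in much the same way as an RGB image.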

Highlights

  • The ubiquity and functionality of wearable devices such as smartphones, smartwatches, and fitness wristbands equipped with motion sensors create new opportunities for continuous monitoring of human physical activities [1]

  • Since many human activities can be reliably recognised from motion information, the automatic and accurate classification of the signals generated by motion sensors can facilitate the development of an effective automated human activity recogniser (HAR) for human-centred monitoring systems [2]

  • We conducted six extensive experiments, each designed with specific goals: finding the best sensor location for activity recognition, comparing against existing activity recognition methods, and validating the robustness of the proposed Deep Human Activity and Location Recognition (DHALR) method against other Convolutional Neural Network (CNN)-based methods


Introduction

The ubiquity and functionality of wearable devices such as smartphones, smartwatches, and fitness wristbands equipped with motion sensors (e.g. accelerometer and gyroscope) create new opportunities for continuous monitoring of human physical activities [1]. HAR systems have been incorporated into many home entertainment products, such as the Microsoft Kinect, for the recognition of hand gestures and body movements to enhance the gaming experience [5]. While the complementary motion information gathered by these multiple sensors can be combined to improve the accuracy of the activity recogniser, detecting the on-body position of the sensors is equally important, because the quality of automatic activity recognition depends largely on the position of the sensor.
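To make the two-stream design concrete, the following is a minimal PyTorch sketch of how accelerometer and gyroscope frequency images could be processed by separate convolutional streams and fused for joint activity and sensor-location prediction. The layer sizes, the number of activity classes, and the late-fusion-by-concatenation strategy are assumptions made for illustration; only the seven body locations come from the text.

```python
# A hedged sketch of a two-stream CNN: one stream consumes accelerometer
# frequency images, the other gyroscope images, and their features are fused
# for joint activity and sensor-location prediction. Layer sizes and the
# fusion strategy are illustrative, not the paper's exact DHALR architecture.
import torch
import torch.nn as nn

class Stream(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )

    def forward(self, x):
        return self.net(x)

class TwoStreamHAR(nn.Module):
    def __init__(self, n_activities=8, n_locations=7):
        # n_activities is a placeholder; n_locations=7 matches the seven
        # body parts mentioned in the abstract.
        super().__init__()
        self.acc_stream = Stream()
        self.gyro_stream = Stream()
        # Two heads: one for the activity label, one for the body location.
        self.activity_head = nn.Linear(64, n_activities)
        self.location_head = nn.Linear(64, n_locations)

    def forward(self, acc_img, gyro_img):
        fused = torch.cat([self.acc_stream(acc_img),
                           self.gyro_stream(gyro_img)], dim=1)
        return self.activity_head(fused), self.location_head(fused)

model = TwoStreamHAR()
acc = torch.randn(4, 3, 33, 6)   # batch of accelerometer frequency images
gyro = torch.randn(4, 3, 33, 6)  # matching gyroscope frequency images
activity_logits, location_logits = model(acc, gyro)
print(activity_logits.shape, location_logits.shape)  # (4, 8) (4, 7)
```

Predicting activity and location from a shared fused representation lets the two tasks regularise each other, which is one common motivation for such a multi-head design.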


