Abstract

Human activity recognition from multimodal body sensor data has proven to be an effective approach for the care of elderly or physically impaired people in a smart healthcare environment. However, traditional machine learning techniques mostly focus on a single sensing modality, which is not practical for robust healthcare applications. Researchers have therefore been paying increasing attention to robust machine learning techniques that can exploit multimodal body sensor data and support important decision making in smart healthcare. In this paper, we propose an effective multi-sensor framework for human activity recognition using a hybrid deep learning model that combines simple recurrent units (SRUs) with gated recurrent units (GRUs). We use deep SRUs to process sequences of multimodal input data through their internal memory states, and deep GRUs to store and learn how much past information is passed on to the future state, which mitigates accuracy fluctuations and the vanishing-gradient problem. The system has been compared against conventional approaches on a publicly available standard dataset, and the experimental results show that the proposed approach outperforms state-of-the-art methods.
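The abstract describes the architecture only at a high level. As a rough illustration, here is a minimal PyTorch sketch of one way such a hybrid could be assembled: a simplified SRU layer written out from the published SRU equations (with the highway connection applied to the projected input for brevity) feeding a standard GRU and a linear classifier. The layer sizes, names, 23-channel/12-activity shapes (those of the public MHEALTH dataset) and the one-SRU-plus-one-GRU stacking are illustrative assumptions, not the authors' exact configuration.

    import torch
    import torch.nn as nn

    class SRULayer(nn.Module):
        """Simplified simple recurrent unit (SRU) layer."""
        def __init__(self, input_size, hidden_size):
            super().__init__()
            # One fused projection yields the candidate, forget gate and reset gate.
            self.proj = nn.Linear(input_size, 3 * hidden_size)
            self.hidden_size = hidden_size

        def forward(self, x):                       # x: (batch, time, input_size)
            xt, f, r = self.proj(x).chunk(3, dim=-1)
            f, r = torch.sigmoid(f), torch.sigmoid(r)
            c = x.new_zeros(x.size(0), self.hidden_size)
            outputs = []
            for t in range(x.size(1)):
                # Internal memory state: c_t = f_t * c_{t-1} + (1 - f_t) * x~_t
                c = f[:, t] * c + (1 - f[:, t]) * xt[:, t]
                # Highway-style output: h_t = r_t * tanh(c_t) + (1 - r_t) * x~_t
                outputs.append(r[:, t] * torch.tanh(c) + (1 - r[:, t]) * xt[:, t])
            return torch.stack(outputs, dim=1)      # (batch, time, hidden_size)

    class HybridSRUGRU(nn.Module):
        def __init__(self, n_channels=23, hidden=64, n_classes=12):
            super().__init__()
            self.sru = SRULayer(n_channels, hidden)              # sequence modelling
            self.gru = nn.GRU(hidden, hidden, batch_first=True)  # gated memory
            self.fc = nn.Linear(hidden, n_classes)               # activity logits

        def forward(self, x):                       # x: (batch, time, n_channels)
            _, last = self.gru(self.sru(x))         # last: (1, batch, hidden)
            return self.fc(last.squeeze(0))

    model = HybridSRUGRU()
    logits = model(torch.randn(8, 128, 23))         # 8 windows of 128 time steps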

Highlights

  • In recent years, human activity recognition (HAR) from wearable body sensor networks has become popular due to its immense potential in application areas such as smart healthcare, transportation, security, robotics and smart homes [1]–[8]

  • We propose an effective multi-sensor framework for human activity recognition using a hybrid deep learning model that combines simple and gated recurrent neural network units

  • The sensitivity and F1-score of the deep simple recurrent unit (SRU)-gated recurrent unit (GRU) model are higher than those of the mHealthDroid framework (both metrics are sketched below)
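For reference, sensitivity is per-class recall, TP / (TP + FN), and the F1-score is the harmonic mean of precision and recall. A minimal scikit-learn sketch with hypothetical label arrays (not the paper's results):

    # Macro-averaged sensitivity (recall) and F1-score over activity classes.
    from sklearn.metrics import recall_score, f1_score

    y_true = [0, 0, 1, 2, 2, 2]   # hypothetical ground-truth activities
    y_pred = [0, 1, 1, 2, 2, 0]   # hypothetical model predictions

    sensitivity = recall_score(y_true, y_pred, average="macro")  # TP / (TP + FN)
    f1 = f1_score(y_true, y_pred, average="macro")               # 2PR / (P + R)
    print(f"sensitivity={sensitivity:.3f}, F1={f1:.3f}")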



Introduction

Human activity recognition (HAR) from wearable body sensor networks has become popular due to its immense potential in application areas such as smart healthcare, transportation, security, robotics and smart homes [1]–[8]. HAR systems typically convert specific body movements, sensed by various wearable body sensors, into sensor signal patterns that can then be classified using machine learning techniques [9]–[12]. Identifying an activity from multimodal body sensor data is not easy [13]–[15]. Traditional machine learning techniques mostly focus on a single sensing modality, which is not practical for robust healthcare applications. With multimodal sensor data, it is difficult to increase recognition accuracy while using a small number of features. HAR from multimodal sensor data relies on combinations of sensors, such as accelerometers and gyroscopes [16]–[18].
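As a concrete illustration of how such multimodal streams are commonly prepared for a recurrent classifier, the sketch below segments a multichannel recording into fixed-length overlapping windows and labels each window by majority vote; the window length, step, 23-channel layout and labelling scheme are illustrative assumptions, not details fixed by this excerpt.

    import numpy as np

    def sliding_windows(signal, labels, width=128, step=64):
        """signal: (n_samples, n_channels); labels: (n_samples,) integer codes."""
        windows, window_labels = [], []
        for start in range(0, len(signal) - width + 1, step):
            windows.append(signal[start:start + width])
            # Label each window by the majority activity occurring inside it.
            window_labels.append(np.bincount(labels[start:start + width]).argmax())
        return np.stack(windows), np.array(window_labels)

    recording = np.random.randn(10_000, 23)            # e.g. accelerometer, gyroscope, ECG channels
    activity = np.random.randint(0, 12, size=10_000)   # per-sample activity labels
    X, y = sliding_windows(recording, activity)
    print(X.shape, y.shape)                            # (155, 128, 23) (155,)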


