Abstract

Human Activity Recognition (HAR) plays an important role in behavior analysis, video surveillance, gesture recognition, gait analysis, and posture recognition. Given recent progress in Artificial Intelligence (AI) applied to HAR, data from wearable sensors can be treated as time series from which movement events can be classified with high accuracy. In this study, a dataset of raw sensor data served as input to four deep learning networks (DNN, CNN, LSTM, and CNN-LSTM), and the models were compared in terms of accuracy and training time. HAR performance was evaluated on three activities: walking, sit-to-stand, and squatting. We also compared two sensor data types: 3-axis linear acceleration measured by two inertial measurement units (IMUs) versus 3D acceleration of two retro-reflective markers tracked by a high-end optoelectronic motion capture (MOCAP) system. The dataset, collected from ten subjects, was preprocessed with labelling and sliding windows and then used as input to the four frameworks. The results indicate that, for HAR prediction, linear accelerations estimated with IMUs are as reliable as those measured with the MOCAP system, and the hybrid CNN-LSTM framework achieved the highest accuracy (99%) for both sensor types.
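To make the pipeline concrete, the sketch below illustrates the two steps the abstract names: sliding-window segmentation of the 3-axis acceleration streams and a hybrid CNN-LSTM classifier. It is a minimal illustration only; the window length, stride, layer sizes, and choice of Keras are assumptions, as the abstract does not specify these details. The six input channels correspond to 3-axis acceleration from two sensors.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def sliding_windows(signal, labels, window=128, stride=64):
    """Segment a (timesteps, channels) signal into overlapping windows.

    The window length and 50% overlap are illustrative assumptions,
    not values reported in the paper.
    """
    X, y = [], []
    for start in range(0, len(signal) - window + 1, stride):
        X.append(signal[start:start + window])
        # Assign each window the majority activity label it contains.
        y.append(np.bincount(labels[start:start + window]).argmax())
    return np.asarray(X), np.asarray(y)

def build_cnn_lstm(window=128, channels=6, n_classes=3):
    """Hypothetical CNN-LSTM: 1D convolutions extract local motion
    features, then an LSTM models their temporal ordering."""
    model = models.Sequential([
        layers.Input(shape=(window, channels)),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.LSTM(64),
        layers.Dense(n_classes, activation="softmax"),  # walk / sit-to-stand / squat
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Swapping the convolutional front end or the recurrent layer for the other architectures named above (plain DNN, CNN-only, LSTM-only) yields the remaining three models compared in the study.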
