Abstract

Smartphones and wearable devices have built-in sensors that collect multivariate time-series data which can be used to recognize human activities. Research on human activity recognition (HAR) has gained significant attention in recent years due to growing demand across various application domains. As wearable sensor-aided devices and the Internet of Things (IoT) have become more common, HAR has attracted great attention in ubiquitous and mobile computing. To infer human activities from the massive amount of multivariate data generated by different wearable devices, this study proposes an innovative deep learning–based model named HAR-DeepConvLG. It comprises three convolution layers and a squeeze-and-excitation (SE) block, which precisely learn and extract spatial representations from the collected raw sensor data. The extracted features are fed into three parallel paths, each consisting of a long short-term memory (LSTM) layer connected in sequence with a gated recurrent unit (GRU) layer to learn temporal representations. The three paths are connected in parallel to mitigate the vanishing gradient problem. Finally, to evaluate the effectiveness of the proposed model, experiments were conducted on four widely used HAR datasets, and its performance was compared with several state-of-the-art deep learning models. The experimental results show that the proposed HAR-DeepConvLG model outperforms existing deep learning–based HAR models, achieving competitive classification accuracy.
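The architecture described above — a convolutional front end with an SE block feeding three parallel LSTM→GRU paths — can be sketched roughly as follows. This is a minimal illustration based only on the abstract; the layer widths, kernel sizes, SE reduction ratio, and the use of the last time step before the classifier are all assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation block over the channel dimension."""
    def __init__(self, channels, reduction=4):  # reduction ratio is an assumption
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):               # x: (batch, channels, time)
        w = self.fc(x.mean(dim=2))      # squeeze: global average over time
        return x * w.unsqueeze(2)       # excite: reweight channels

class HARDeepConvLG(nn.Module):
    """Hypothetical sketch of HAR-DeepConvLG as described in the abstract."""
    def __init__(self, n_sensors, n_classes, conv_ch=64, rnn_units=32):
        super().__init__()
        # three convolution layers + SE block for spatial features
        self.conv = nn.Sequential(
            nn.Conv1d(n_sensors, conv_ch, 5, padding=2), nn.ReLU(),
            nn.Conv1d(conv_ch, conv_ch, 5, padding=2), nn.ReLU(),
            nn.Conv1d(conv_ch, conv_ch, 5, padding=2), nn.ReLU(),
            SEBlock(conv_ch),
        )
        # three parallel paths, each an LSTM followed by a GRU
        self.paths = nn.ModuleList(
            nn.ModuleDict({
                "lstm": nn.LSTM(conv_ch, rnn_units, batch_first=True),
                "gru": nn.GRU(rnn_units, rnn_units, batch_first=True),
            })
            for _ in range(3)
        )
        self.head = nn.Linear(3 * rnn_units, n_classes)

    def forward(self, x):                 # x: (batch, time, n_sensors)
        h = self.conv(x.transpose(1, 2))  # -> (batch, conv_ch, time)
        h = h.transpose(1, 2)             # -> (batch, time, conv_ch)
        outs = []
        for p in self.paths:
            s, _ = p["lstm"](h)
            s, _ = p["gru"](s)
            outs.append(s[:, -1])         # last time step of each path
        return self.head(torch.cat(outs, dim=1))

# Example: 4 windows of 128 time steps from 9 sensor channels, 6 activities
model = HARDeepConvLG(n_sensors=9, n_classes=6)
logits = model(torch.randn(4, 128, 9))
print(tuple(logits.shape))  # -> (4, 6)
```

Concatenating the three parallel paths gives each path a direct gradient route to the loss, which is one plausible reading of how the parallel design helps with vanishing gradients.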
