Abstract

The self-regulated recognition of human activities from time-series smartphone sensor data is a growing research area in smart and intelligent health care. Deep learning (DL) approaches have exhibited improvements over traditional machine learning (ML) models in various domains, including human activity recognition (HAR). Traditional ML approaches involve several issues: handcrafted feature extraction, which is a tedious and complex task requiring expert domain knowledge, and the use of a separate dimensionality reduction module to overcome overfitting and thereby provide model generalization. In this article, we propose a DL-based approach for activity recognition with smartphone sensor data, i.e., accelerometer and gyroscope data. Convolutional neural networks (CNNs), autoencoders (AEs), and long short-term memory (LSTM) networks possess complementary modeling capabilities: CNNs are good at automatic feature extraction, AEs are used for dimensionality reduction, and LSTMs are adept at temporal modeling. In this study, we take advantage of this complementarity by combining CNNs, AEs, and LSTMs into a unified architecture. We explore the proposed architecture, namely, “ConvAE-LSTM”, on four standard public datasets (WISDM, UCI, PAMAP2, and OPPORTUNITY). The experimental results indicate that our novel approach is practical and improves on existing state-of-the-art smartphone-based HAR solutions in terms of computational time, accuracy, F1-score, precision, and recall.
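To make the abstract's "unified architecture" concrete, here is a minimal Keras sketch of a ConvAE-LSTM-style network: a convolutional encoder feeds both a reconstruction decoder (the AE branch) and an LSTM classification head. The layer counts, filter sizes, and the 128-timestep, 6-channel input window (accelerometer + gyroscope) are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal ConvAE-LSTM-style sketch (TensorFlow 2.x / Keras).
# NOTE: layer sizes and the 128x6 window are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW, CHANNELS, N_CLASSES = 128, 6, 6  # assumed window length and class count

inputs = layers.Input(shape=(WINDOW, CHANNELS))

# Convolutional encoder: automatic feature extraction + dimensionality reduction.
x = layers.Conv1D(32, 5, padding="same", activation="relu")(inputs)
x = layers.MaxPooling1D(2)(x)
x = layers.Conv1D(16, 5, padding="same", activation="relu")(x)
encoded = layers.MaxPooling1D(2)(x)  # compressed representation (32 steps x 16)

# Convolutional decoder: reconstruction branch of the autoencoder.
x = layers.Conv1D(16, 5, padding="same", activation="relu")(encoded)
x = layers.UpSampling1D(2)(x)
x = layers.Conv1D(32, 5, padding="same", activation="relu")(x)
x = layers.UpSampling1D(2)(x)
decoded = layers.Conv1D(CHANNELS, 5, padding="same", name="recon")(x)

# LSTM head on the encoded sequence: temporal modeling + activity classification.
h = layers.LSTM(64)(encoded)
outputs = layers.Dense(N_CLASSES, activation="softmax", name="activity")(h)

model = models.Model(inputs, [decoded, outputs])
model.compile(
    optimizer="adam",
    loss={"recon": "mse", "activity": "sparse_categorical_crossentropy"},
    metrics={"activity": "accuracy"},
)
model.summary()
```

Training against both the reconstruction loss and the classification loss is one plausible way to wire the two branches; the paper's exact training procedure may differ.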

Highlights

  • Human activity recognition (HAR) has been a popular research area for several decades due to its wide applications in smart health care, ambient assisted living, disease prediction, video surveillance, remote health care, and so on [1], [2].

  • We propose the convolutional AE (ConvAE)-long short-term memory (LSTM) network, a novel deep learning (DL) architecture that (a) automatically extracts features from unlabeled raw sensory data, (b) uses fewer parameters due to the presence of a convolution layer, which minimizes the risk of overfitting, (c) reduces the required computational time, and (d) enhances the accuracy of HAR. We demonstrate the effectiveness of our proposed ConvAE-LSTM network through empirical experiments on two different standard public smartphone sensor-based HAR datasets in the same experimental environment.

  • We present the experimental results of our proposed method (ConvAE-LSTM) on two smartphone sensor-based public standard datasets (UCI [11] and WISDM [10]) and two body-worn sensor-based public standard datasets (OPPORTUNITY [82] and PAMAP2 [88]); a sketch of the reported evaluation metrics follows this list.
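Since the evaluation reports accuracy, F1-score, precision, and recall, the following minimal scikit-learn sketch shows how such metrics are typically computed from per-window predictions. The placeholder labels and the macro averaging scheme are assumptions; the paper's exact averaging is not stated here.

```python
# Hypothetical sketch: computing the reported HAR metrics with scikit-learn.
# y_true/y_pred are placeholder per-window activity labels and predictions;
# macro averaging is an assumption, not necessarily the authors' choice.
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

y_true = [0, 1, 2, 2, 1, 0]  # placeholder ground-truth activity labels
y_pred = [0, 1, 2, 1, 1, 0]  # placeholder model predictions

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
print("F1-score :", f1_score(y_true, y_pred, average="macro"))
```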


Summary

INTRODUCTION

Human activity recognition (HAR) has been a popular research area for several decades due to its wide applications in smart health care, ambient assisted living, disease prediction, video surveillance, remote health care, and so on [1], [2]. We propose ConvAE-LSTM, a novel DL architecture that (a) automatically extracts features from unlabeled raw sensory data, (b) uses fewer parameters due to the presence of a convolution layer, which minimizes the risk of overfitting, (c) reduces the required computational time, and (d) enhances the accuracy of HAR. In [73], the authors proposed CNN- and LSTM-based HAR solutions using two different public datasets collected by wearable sensors. Ye et al. [78] suggested a two-stream convolutional network-based ‘‘convolutional LSTM’’ architecture to recognize various daily life activities; they used the HMDB51 and UCF101 video datasets and extracted features by using the convolution layer of the CNN. Xia et al. [80] suggested an LSTM-CNN-based HAR framework to identify different daily life activities using three different datasets: UCI, WISDM, and OPPORTUNITY. In the AE, the decoder reconstructs the input from the encoded representation as x̂ = f(wh + b), where w and b are the weights and biases of the decoder, respectively (see the sketch below).
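For context, a standard autoencoder formulation consistent with that sentence is sketched below; only the decoder's w and b are named in the text, so the activation symbol f, the encoder parameters, and the reconstruction loss are assumptions drawn from the usual AE setup.

```latex
% Standard autoencoder sketch; notation beyond w and b is assumed.
\begin{align}
  h       &= f\!\left(W x + b_e\right)        && \text{encoder: } x \mapsto h \\
  \hat{x} &= f\!\left(w h + b\right)          && \text{decoder: } h \mapsto \hat{x} \\
  \mathcal{L}(x,\hat{x}) &= \lVert x - \hat{x} \rVert_2^2 && \text{reconstruction loss}
\end{align}
```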

THE PROPOSED MODEL
PERFORMANCE EVALUATION
EXPERIMENTAL RESULTS OBTAINED ON THE OPPORTUNITY AND PAMAP2 DATASETS
STATISTICAL ANALYSIS
DISCUSSIONS
Findings
CONCLUSION