Abstract
Human activity recognition (HAR) leverages sensors such as accelerometers and gyroscopes to discern human physical activities, offering transformative insights for smart healthcare applications, from Parkinson's disease monitoring to diabetes management. While deep learning (DL) methods have emerged as frontrunners in HAR using wearable sensors, they often struggle with long-term dependencies in sequential data and with intricate feature extraction from large datasets. Addressing these gaps, this paper introduces a novel ensemble-based deep neural network that integrates four distinct classifiers: CNN-LSTM, LSTM-CNN, CNN-BiLSTM, and BiLSTM-CNN. This combination enhances feature extraction and captures the detailed long-term patterns that have been challenging for traditional HAR models, while ensemble learning makes the model's predictions more consistent and reliable. Benchmark evaluations on the mHealth, UCI-HAR, and WISDM datasets validate the model's superiority, with accuracy scores of 99.91%, 98.10%, and 99.48%, respectively. Further, k-fold cross-validation with k = 5 is used to assess performance in terms of mean accuracy and mean F1-score. These compelling results underscore the model's capacity to address the inherent limitations of existing HAR systems, positioning it as a groundbreaking tool for advanced human activity recognition in smart healthcare scenarios.
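The abstract does not state how the four classifiers' outputs are fused; a common choice for such ensembles is soft voting, i.e. averaging each classifier's per-class probabilities and taking the argmax. The sketch below illustrates that fusion rule only, with hypothetical probability vectors standing in for the four hybrid models' outputs; it is an assumption-laden illustration, not the paper's actual combination scheme.

```python
# Hedged sketch of soft-voting fusion over four classifiers' outputs.
# The probability vectors below are invented for illustration; the real
# system would obtain them from the trained CNN-LSTM, LSTM-CNN,
# CNN-BiLSTM, and BiLSTM-CNN networks.

def soft_vote(prob_lists):
    """Average per-class probabilities across classifiers; return argmax class index."""
    n = len(prob_lists)                 # number of classifiers
    k = len(prob_lists[0])              # number of activity classes
    avg = [sum(p[i] for p in prob_lists) / n for i in range(k)]
    return max(range(k), key=lambda i: avg[i])

# Four hypothetical probability vectors over 3 activity classes:
outputs = [
    [0.7, 0.2, 0.1],   # e.g. CNN-LSTM
    [0.6, 0.3, 0.1],   # e.g. LSTM-CNN
    [0.5, 0.4, 0.1],   # e.g. CNN-BiLSTM
    [0.2, 0.6, 0.2],   # e.g. BiLSTM-CNN
]
print(soft_vote(outputs))  # → 0 (class 0 wins with mean probability 0.5)
```

Averaging probabilities rather than taking a hard majority vote lets a confident classifier outweigh several uncertain ones, which is one reason soft voting is often preferred for probabilistic ensembles.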