Abstract

Human activity recognition (HAR) from inertial sensor data plays a pivotal role in domains such as healthcare, sports, and smart environments. In this paper, we present DeepHAR-Net, a deep learning approach for improving the accuracy and robustness of human activity recognition from inertial sensor data. Traditional methods in this field often rely on handcrafted features and shallow models, which can struggle to capture the intricate patterns within complex activities. DeepHAR-Net overcomes these limitations by learning hierarchical representations directly from raw sensor data. The architecture combines convolutional neural networks (CNNs) with long short-term memory (LSTM) networks, enabling the model to capture both the spatial and the temporal dependencies present in multi-dimensional sensor sequences. Additionally, we introduce a data augmentation strategy tailored to inertial sensor data, which further improves the model's ability to generalize across variations in sensor placement and orientation. We evaluate DeepHAR-Net on benchmark datasets against state-of-the-art methods. The experimental results show significant improvements in accuracy across a range of activity recognition scenarios. Notably, DeepHAR-Net adapts well to different sensor configurations, demonstrating its potential for real-world deployment in diverse applications.
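
The abstract does not specify the exact layer configuration of DeepHAR-Net, so the following PyTorch sketch is only an illustrative rendering of the kind of CNN-LSTM hybrid it describes: 1-D convolutions extract local patterns from raw multi-channel sensor windows, and an LSTM then models temporal dependencies across the resulting feature sequence. All layer sizes, channel counts, and class counts below are placeholder assumptions, not the published configuration.

    import torch
    import torch.nn as nn

    class CNNLSTM(nn.Module):
        """Hypothetical CNN-LSTM hybrid in the spirit of DeepHAR-Net.

        Layer counts and sizes are illustrative placeholders; the
        paper's actual hyperparameters are not given in the abstract.
        """
        def __init__(self, n_channels=6, n_classes=6, hidden=128):
            super().__init__()
            # 1-D convolutions learn local patterns across the raw
            # sensor channels (e.g. tri-axial accelerometer + gyroscope)
            # within each time window.
            self.cnn = nn.Sequential(
                nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(64, 128, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool1d(2),
            )
            # The LSTM captures temporal dependencies across the
            # CNN feature sequence.
            self.lstm = nn.LSTM(input_size=128, hidden_size=hidden,
                                batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):
            # x: (batch, channels, time) raw inertial windows
            feats = self.cnn(x)              # (batch, 128, time // 4)
            feats = feats.transpose(1, 2)    # (batch, time // 4, 128)
            _, (h_n, _) = self.lstm(feats)   # final hidden state
            return self.head(h_n[-1])        # class logits

Likewise, an augmentation aimed at sensor placement and orientation variation could plausibly take the form of random 3-D rotations applied to each tri-axial window; the NumPy sketch below (using Rodrigues' rotation formula) is an assumed stand-in, since the abstract does not detail the actual strategy.

    import numpy as np

    def random_rotation_augment(window, rng=None):
        """Rotate a (time, 3) tri-axial sensor window by one random
        3-D rotation, simulating a change in sensor orientation.

        Illustrative only; not the paper's documented augmentation.
        """
        rng = rng or np.random.default_rng()
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)
        angle = rng.uniform(0, 2 * np.pi)
        # Rodrigues' formula: R = I + sin(a) K + (1 - cos(a)) K^2,
        # where K is the skew-symmetric cross-product matrix of axis.
        K = np.array([[0, -axis[2], axis[1]],
                      [axis[2], 0, -axis[0]],
                      [-axis[1], axis[0], 0]])
        R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
        return window @ R.T

Applying the same rotation to all tri-axial channels of a window would preserve their physical consistency while simulating a re-oriented sensor.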
