Abstract
Human Activity Recognition (HAR) is a research domain concerned with detecting the everyday activities performed by individuals from time-series data captured by sensors. HAR has a wide range of applications, including surveillance, baby monitoring, elderly healthcare, and smart driving. Deep learning techniques for HAR are gaining popularity due to their effectiveness in identifying intricate activities and their cost-effectiveness compared to conventional machine learning methods. This article provides a brief introduction to the application of deep learning in HAR. It covers the fundamental concepts of CNNs and LSTMs, their respective strengths in capturing spatial and temporal features, and their integration for enhanced activity recognition. Different approaches are employed in HAR to balance efficiency and precision. Conventional HAR systems rely on wearable devices such as IMUs and stretch sensors, and they have proven effective at recognizing simple user actions like sitting, standing, and walking. For more intricate activities such as running, jumping, wrestling, and swinging, however, sensor-based HAR systems suffer higher misclassification rates due to inaccuracies in sensor readings, which significantly degrade overall classification performance. In contrast, vision-based HAR systems identify complex activities more accurately, leading to enhanced overall performance.
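The CNN-LSTM combination mentioned above can be illustrated with a minimal sketch: 1-D convolutions extract local feature patterns from each window of multi-channel sensor readings, and an LSTM models the temporal dependencies across the window before a final classification layer. This is a generic PyTorch example, not the specific architecture from any cited work; the channel counts, layer sizes, and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Hypothetical CNN-LSTM sketch for sensor-based HAR.

    1-D convolutions capture local (spatial) structure across sensor
    channels; the LSTM captures temporal dynamics over the window.
    All hyperparameters here are illustrative assumptions.
    """
    def __init__(self, n_channels=6, n_classes=6, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden,
                            batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, channels); Conv1d expects (batch, channels, time)
        feats = self.conv(x.transpose(1, 2))       # (batch, 64, time)
        out, _ = self.lstm(feats.transpose(1, 2))  # (batch, time, hidden)
        return self.fc(out[:, -1, :])              # scores from last time step

# Usage: a batch of 8 windows, 128 time steps, 6 IMU channels
# (e.g., 3-axis accelerometer + 3-axis gyroscope)
logits = CNNLSTM()(torch.randn(8, 128, 6))
print(logits.shape)  # torch.Size([8, 6])
```

In practice the class scores would be trained with a cross-entropy loss against activity labels such as sitting, standing, and walking; vision-based variants replace the 1-D convolutional front end with 2-D convolutions over video frames.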