Abstract

Smartwatches are increasingly popular for recognizing and monitoring human activity in everyday life. These wearable devices are equipped with inertial measurement unit (IMU) sensors for ubiquitous recording and processing of physical activity data. Sensor-based human activity recognition (HAR) has become one of the most active research topics due to its wide range of real-life applications in practical domains such as healthcare monitoring, sports and exercise tracking, and misbehavior prevention. Many machine learning and deep learning approaches have recently been proposed to solve the HAR problem, focusing on activities of daily living. However, an exciting and challenging HAR topic deals with more complex human activities, such as eating-related activities. This paper proposes a sensor-based HAR framework using data on eating-related activities recorded by a smartwatch sensor. In this framework, six deep learning networks (CNN, LSTM, BiLSTM, Stacked LSTM, CNN-LSTM, and LSTM-CNN) are evaluated for recognizing eating-related activities. To ensure the model’s dependability, eating-related activity data from the standard publicly available WISDM-HARB dataset are used to evaluate the proposed framework with state-of-the-art metrics: accuracy and confusion matrices. Experimental findings demonstrate that the Stacked LSTM model outperforms the other deep learning models, achieving an accuracy of 97.37%.
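Sensor-based HAR pipelines like the one described typically segment the continuous smartwatch sensor stream into fixed-size, overlapping windows before feeding them to the deep learning models. The abstract does not state the window length or overlap used; the sketch below assumes illustrative values (200-sample windows, 50% overlap) and labels each window by majority vote:

```python
import numpy as np

def segment_windows(samples, labels, window_size=200, step=100):
    """Split a continuous sensor stream into fixed-size, overlapping
    windows; each window gets the majority activity label inside it.
    window_size and step are illustrative, not from the paper."""
    windows, window_labels = [], []
    for start in range(0, len(samples) - window_size + 1, step):
        end = start + window_size
        windows.append(samples[start:end])
        # majority vote over the per-sample labels in this window
        vals, counts = np.unique(labels[start:end], return_counts=True)
        window_labels.append(vals[np.argmax(counts)])
    return np.array(windows), np.array(window_labels)

# toy stream: 1000 tri-axial accelerometer samples, two activities
x = np.random.randn(1000, 3)
y = np.array([0] * 600 + [1] * 400)
X, Y = segment_windows(x, y)
print(X.shape)  # → (9, 200, 3): 9 windows of 200 samples × 3 axes
```

The resulting `(windows, timesteps, channels)` tensor is the standard input shape for the CNN and LSTM variants the paper compares.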

