Abstract

Human activity recognition (HAR) is considered one of the most promising fields of inquiry in computer science research. Interpreting human body gestures to determine activities has been widely applied in health care and human–computer interaction. Researchers have employed various sensing approaches in this domain, such as wearable sensors, device-free sensing, and object tags, to recognize human activities. In this paper, we present a detailed analysis of HAR sensor data processed with two deep neural network (DNN) models, a convolutional neural network (CNN) and a recurrent neural network (RNN), and compare their respective accuracies. The data used to build the DNN models was provided by the Wireless Sensor Data Mining (WISDM) lab and was collected from 36 people carrying wearable sensors and performing six different activities, each several times: (1) walking, (2) sitting, (3) walking upstairs, (4) walking downstairs, (5) jogging, and (6) standing. Finally, we compare the two approaches (CNN and RNN) on their activity-recognition performance.
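Before such sensor streams can be fed to a CNN or RNN, they are typically segmented into fixed-length windows. The abstract does not specify the preprocessing used, so the sketch below is a generic, hypothetical example of this windowing step; the window length, step size, and function name are illustrative assumptions, not details from the paper.

```python
import numpy as np

def segment_windows(signal: np.ndarray, window: int = 200, step: int = 100) -> np.ndarray:
    """Segment a tri-axial accelerometer stream into overlapping windows.

    signal: (n_samples, 3) array of x/y/z readings.
    Returns an array of shape (n_windows, window, 3), suitable as
    input to a 1-D CNN or an RNN. Window/step values are assumed,
    not taken from the paper.
    """
    starts = range(0, len(signal) - window + 1, step)
    return np.stack([signal[s:s + window] for s in starts])

# Simulated stream: 1000 samples of x/y/z acceleration.
stream = np.random.randn(1000, 3)
windows = segment_windows(stream)
print(windows.shape)  # (9, 200, 3)
```

With a 50% overlap (step = window / 2), each sample appears in two windows, which is a common way to increase the number of training examples without collecting more data.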
