Abstract

Human activity recognition (HAR) is an emerging research challenge with potential applications in many fields, including healthcare, sports, and security. However, only a few publicly accessible datasets for classifying and recognizing physical activity exist in the literature, and they cover a limited number of activities. We created a new dataset and compared it with available datasets, namely NTU-RGBD, UP-FALL, UR-Fall, WISDM, and UCI HAR. The proposed dataset consists of seven activities: eating, exercise, handshake, situps, vomiting, headache, and walking. The activities were recorded from 20 participants aged 25 to 40 years using a Kinect V2 sensor at 30 FPS. For classification, we use deep learning architectures based on convolutional neural networks (CNNs) and long short-term memory (LSTM) networks. Additionally, we developed a novel hybrid deep learning model that combines a CNN, a bidirectional LSTM unit, and a fully connected layer for activity identification. The proposed model builds guided features from the preprocessed skeleton coordinates and their distinctive geometrical and kinematic properties. Experimental results are compared with the performance of stand-alone CNN, LSTM, and ConvLSTM models. The proposed model's accuracy of 99.5% surpasses that of CNN, LSTM, and ConvLSTM, which achieve accuracy rates of 95.76%, 97%, and 98.89%, respectively. The proposed technique is invariant to pose, speed, subject, and clothing. A sample of the proposed dataset is accessible to the general public.
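
The sketch below illustrates one way the hybrid CNN + bidirectional LSTM + fully connected architecture described above could be assembled in Keras. It assumes 25 Kinect V2 skeleton joints with (x, y, z) coordinates, fixed-length clips of 60 frames, and the seven activity classes; the clip length, layer widths, and training settings are illustrative assumptions, not the authors' exact configuration or guided-feature pipeline.

```python
# Minimal sketch of a CNN + BiLSTM + dense classifier for skeleton-based HAR.
# Assumptions (not from the paper): 60-frame clips, flattened joint coordinates
# as the per-frame feature vector, and the layer sizes shown here.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES = 60        # frames per clip (assumed)
NUM_JOINTS = 25        # Kinect V2 skeleton joints
NUM_COORDS = 3         # x, y, z per joint
NUM_CLASSES = 7        # eating, exercise, handshake, situps, vomiting, headache, walking

def build_cnn_bilstm(num_frames=NUM_FRAMES,
                     feat_dim=NUM_JOINTS * NUM_COORDS,
                     num_classes=NUM_CLASSES):
    """CNN front end over the frame sequence, BiLSTM over time, dense head."""
    inputs = layers.Input(shape=(num_frames, feat_dim))
    # 1-D convolutions along the time axis extract short-term motion patterns.
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inputs)
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(x)
    x = layers.MaxPooling1D(pool_size=2)(x)
    # A bidirectional LSTM models longer-range temporal context in both directions.
    x = layers.Bidirectional(layers.LSTM(128))(x)
    x = layers.Dropout(0.5)(x)
    # Fully connected layer produces the seven-class activity prediction.
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_cnn_bilstm()
    # Dummy batch: 8 clips of 60 frames, each frame flattened to 75 coordinates.
    dummy = np.random.rand(8, NUM_FRAMES, NUM_JOINTS * NUM_COORDS).astype("float32")
    print(model.predict(dummy).shape)  # (8, 7)
```

In practice the per-frame input would be the preprocessed skeleton coordinates augmented with the geometrical and kinematic features mentioned in the abstract (e.g., joint distances and velocities), which would change only the feature dimension fed to the first convolutional layer.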
