Abstract

In recent years, a variety of approaches using deep learning features have been proposed for human action recognition owing to the strength of deep neural networks. In this work, we put forward a new deep neural network architecture based on transfer learning (TL) for human action recognition. We illustrate how transfer learning can improve the recognition of human activities when only a small video data set is available. The model is built on an Inception-ResNet convolutional neural network (CNN) and a long short-term memory (LSTM) network. We train the model by extracting feature vectors with Inception_ResNet_v2; the output feature vectors from the CNN are then fed into the LSTM to learn the action sequence. The trained model is then used to classify the input videos. The accuracy of the model was compared against VGG16, ResNet152 and Inception_v3 models. The results show that the LSTM architecture using Inception_ResNet_v2 provides the best accuracy scores of 92% and 91% on the UCF101 and HMDB51 data sets, respectively.
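The sketch below illustrates the kind of CNN+LSTM pipeline the abstract describes, assuming TensorFlow/Keras. The frame count, LSTM width, dropout rate, and training settings are illustrative assumptions, not the authors' reported configuration; only the use of a pretrained Inception_ResNet_v2 feature extractor followed by an LSTM classifier is taken from the abstract.

```python
# Minimal sketch of a transfer-learning CNN+LSTM action classifier.
# Assumptions (not from the paper): 20 sampled frames per video,
# a 256-unit LSTM, 0.5 dropout, and Adam optimization.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES = 20            # assumed frames sampled per video clip
FRAME_SIZE = (299, 299)    # Inception_ResNet_v2 default input resolution
NUM_CLASSES = 101          # e.g. UCF101

# Frozen Inception_ResNet_v2 backbone used as a per-frame feature extractor
# (transfer learning: ImageNet weights, no fine-tuning in this sketch).
backbone = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(*FRAME_SIZE, 3))
backbone.trainable = False

# Apply the CNN to every frame, then pass the sequence of feature vectors
# to an LSTM that learns the temporal action pattern.
inputs = layers.Input(shape=(NUM_FRAMES, *FRAME_SIZE, 3))
features = layers.TimeDistributed(backbone)(inputs)   # (NUM_FRAMES, 1536)
x = layers.LSTM(256)(features)                        # assumed LSTM width
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

In this arrangement the backbone stays frozen, so training only updates the LSTM and the classification head, which is what makes the approach feasible on a small video data set.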
