Abstract

Video-based human action recognition is one of the most important and challenging research areas in computer vision. Human action recognition has found many practical applications in video surveillance, human–computer interaction, entertainment, autonomous driving, etc. Owing to recent developments in deep learning methods for human action recognition, recognition performance has improved significantly on challenging datasets. Deep learning techniques are mainly used for recognizing actions in images and videos, which consist of Euclidean data. A recent development in deep learning is the extension of these techniques to non-Euclidean data, i.e., graph data with many nodes and edges. The human body skeleton resembles a graph; therefore, the graph convolutional network (GCN) is applicable to the non-Euclidean body skeleton. In the past few years, the GCN has emerged as an important tool for skeleton-based action recognition. Therefore, we conduct a survey of GCN methods for action recognition. Herein, we present a comprehensive overview of recent GCN techniques for action recognition, propose a taxonomy for categorizing GCN techniques for action recognition, carry out a detailed study of the benchmark datasets, list relevant resources and open-source codes, and finally provide an outline of future research directions and trends. To the best of the authors' knowledge, this is the first survey of action recognition using GCN techniques.

Impact Statement

Graph convolutional neural networks have made great progress in recent years. There is a similarity between the body skeleton and a graph; therefore, GCNs have been widely used for skeleton-based action recognition. In this article, we summarize recent graph-based action recognition techniques, provide deeper insight into these methods, and list source codes and available resources. This article will help researchers develop a basic understanding of graph convolutional methods for action recognition, benefit from useful resources, and consider future directions.
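To illustrate the core operation that the surveyed skeleton-based methods build on, the minimal sketch below applies a single graph-convolution layer to a toy skeleton graph. The 5-joint skeleton, feature dimensions, and random weights are illustrative assumptions for this sketch only, not parameters from any specific method covered in the survey.

```python
import numpy as np

# Minimal sketch: one graph-convolution layer over a toy skeleton graph.
# The 5-joint chain, feature sizes, and random weights are illustrative
# assumptions; real skeleton-based models use the full joint set of the
# dataset and learned parameters.

num_joints, in_dim, out_dim = 5, 3, 8  # joints, input features (x, y, z), output channels

# Adjacency of a tiny skeleton: joint 1 connected to joints 0, 2, 3, 4
edges = [(0, 1), (1, 2), (1, 3), (1, 4)]
A = np.zeros((num_joints, num_joints))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}
A_hat = A + np.eye(num_joints)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

# One frame of per-joint features (e.g., 3-D coordinates) and a random weight matrix
X = np.random.randn(num_joints, in_dim)
W = np.random.randn(in_dim, out_dim)

# Graph convolution: aggregate features from adjacent joints, then apply a linear map and ReLU
H = np.maximum(A_norm @ X @ W, 0.0)
print(H.shape)  # (5, 8): each joint's features now mix information from its neighbors
```

Skeleton-based action recognition models stack layers of this kind (often combined with temporal convolutions over frame sequences) so that per-joint features progressively encode the spatial structure of the body.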
