Abstract

With the development of smart technologies such as the Internet of Things (IoT), Artificial Intelligence (AI), and Big Data, people have started using them for a wide range of purposes. Real-time IoT devices generate enormous amounts of video and image data, giving rise to complex data structures, and mining and extracting useful features and information from such data poses major challenges. How to efficiently analyse and process video data to obtain valuable information has therefore become a key research topic. Traditional manual annotation methods cannot keep pace with the growing number of videos, so a more convenient method for processing video data is needed. This paper focuses on dance videos, with the goal of automatically recognizing dance movements. A Dual Convolutional Neural Network Algorithm (DCNNA) is proposed for the automatic recognition of different dance movements in live and remote videos. DCNNA extracts video information more comprehensively and efficiently: it simultaneously extracts the optical-flow features corresponding to motion changes and the spatial information contained in each frame of the video, allowing dance movements to be identified more accurately. In the experiments, the performance of DCNNA is evaluated on dance videos and compared with Inception V3 and 3D-CNN. All experiments illustrate the superior performance of the proposed DCNNA; in particular, its F1 score is 11% and 6% higher than that of Inception V3 and 3D-CNN, respectively.
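
To make the dual-stream idea concrete, the sketch below shows a minimal two-stream CNN in PyTorch: one stream takes a single RGB frame (appearance), the other a stack of optical-flow fields (motion), and their features are fused before classification. This is only an illustrative assumption of how such an architecture can be wired up; the layer sizes, the number of stacked flow frames (`FLOW_STACK`), and `NUM_CLASSES` are hypothetical and do not reflect the paper's actual DCNNA configuration.

```python
# Minimal sketch of a two-stream ("dual") CNN for dance-movement recognition.
# Stream 1 processes an RGB frame (spatial appearance); stream 2 processes a
# stack of optical-flow fields (temporal motion). Features are concatenated
# (late fusion) and classified. All sizes below are illustrative assumptions,
# not the paper's DCNNA configuration.
import torch
import torch.nn as nn

NUM_CLASSES = 10   # hypothetical number of dance-movement categories
FLOW_STACK = 10    # hypothetical number of stacked flow frames (x/y pairs -> 2*10 channels)

def conv_stream(in_channels: int) -> nn.Sequential:
    """Small convolutional feature extractor used by both streams."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1),  # global pooling -> (batch, 64, 1, 1)
        nn.Flatten(),             # -> (batch, 64)
    )

class TwoStreamCNN(nn.Module):
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.spatial = conv_stream(in_channels=3)                 # RGB frame
        self.temporal = conv_stream(in_channels=2 * FLOW_STACK)   # stacked optical flow (x and y)
        self.classifier = nn.Linear(64 + 64, num_classes)         # late fusion of both streams

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.spatial(rgb), self.temporal(flow)], dim=1)
        return self.classifier(fused)

if __name__ == "__main__":
    model = TwoStreamCNN()
    rgb = torch.randn(4, 3, 112, 112)                # batch of 4 RGB frames
    flow = torch.randn(4, 2 * FLOW_STACK, 112, 112)  # matching stacked flow fields
    logits = model(rgb, flow)
    print(logits.shape)  # torch.Size([4, 10])
```

In such a design the motion stream captures how the dancer moves between frames, while the appearance stream captures pose and context within a frame; combining both typically improves recognition over using either stream alone, which is consistent with the comparison against single-input baselines reported in the abstract.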
