Abstract

A sports training video classification model based on deep learning is studied to address the low classification accuracy caused by the randomness of target movement in sports training videos. Camera calibration technology is used to recover the position of the target in real three-dimensional space. After the camera in the video is calibrated, the sports training video is preprocessed: the input video is divided into equal-length segments to obtain sub-video segments. The motion vector field, brightness, color, and texture features of each sub-video segment are extracted, and the extracted features are fed into an AlexNet convolutional neural network. ReLU is used as the activation function in this network, and local response normalization is used to suppress or enhance neuron outputs so that useful information is highlighted and the output classification results are more accurate. An event matching method is then used to match the network output and complete the classification of the sports training video. The experimental results show that the proposed model effectively handles the randomness of target movement, achieves a classification accuracy above 99% on sports training videos, and classifies them at a higher speed.
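
The classification stage described above can be illustrated with a minimal sketch, assuming PyTorch (the paper does not publish code, so the channel layout and class count below are assumptions for illustration): the per-segment feature maps (motion vector field, brightness, color, and texture) are stacked as input channels to an AlexNet-style network that uses ReLU activations and local response normalization (LRN).

```python
# Minimal sketch of the AlexNet-style classifier with ReLU + LRN (assumed
# layout; not the authors' released implementation).
import torch
import torch.nn as nn

class AlexNetSegmentClassifier(nn.Module):
    def __init__(self, in_channels: int = 6, num_classes: int = 5):
        # in_channels: e.g. 2 motion-vector channels + 1 brightness +
        # 2 color + 1 texture map (hypothetical stacking, not specified
        # in the paper).
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 96, kernel_size=11, stride=4),
            nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)               # convolution + ReLU + LRN stages
        x = torch.flatten(x, start_dim=1)  # flatten per sub-video segment
        return self.classifier(x)          # per-class scores for event matching

# Example: a batch of 8 sub-video segments, each a 227x227 feature stack.
model = AlexNetSegmentClassifier(in_channels=6, num_classes=5)
scores = model(torch.randn(8, 6, 227, 227))
print(scores.shape)  # torch.Size([8, 5])
```

The per-class scores produced here would then be passed to the event matching step described in the abstract to produce the final video label.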

Highlights

  • With the rapid development of multimedia technology, sports have received unprecedented attention and development. The mainstream research work on sports training video includes field and ground line detection; player detection, recognition, and tracking; camera calibration; event detection; and video abstract extraction. The classification of sports training video based on semantic information refers to the use of machine vision technology to automatically identify the type of sports training on the field and to express the recognition result in a certain form [1]

  • Aiming at the shortcomings of existing approaches to sports training video classification, this paper establishes a sports training video classification model based on a deep learning method

  • A convolutional neural network with deep learning is used for classification in the proposed research



Introduction

With the rapid development of multimedia technology, sports have received unprecedented attention and development. The mainstream research work on sports training video includes field and ground line detection; player detection, recognition, and tracking; camera calibration; event detection; and video abstract extraction. The classification of sports training video based on semantic information refers to the use of machine vision technology to automatically identify the type of sports training on the field and to express the recognition result in a certain form [1]. A multitarget tracking method based on a support vector regression particle filter has been used to extract the trajectories of the players and the football, and the interactive spatiotemporal information between the player and football trajectories was used to express and recognize tactical behavior in football games. Niu et al. achieved camera calibration by detecting and tracking the ground lines in the video image and then expressed and recognized tactical behavior using the spatiotemporal trajectory information of the interaction between players and the football in real space. However, owing to mutual occlusion between targets, the randomness of target movement, and the complexity of the background, there are still many problems with the accuracy and persistence of target tracking. In addition, because sports training video is mainly shot from a long-distance view, the identification of players and the ball is poor under complex lighting conditions.
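
Field-line based camera calibration of the kind mentioned above can be illustrated, as an assumption rather than the cited authors' implementation, with a planar homography: intersections of detected ground lines in the image are matched to their known positions on a field model, and the resulting homography maps detected player positions back to real-world field coordinates. The point values below are hypothetical.

```python
# Minimal illustration of field-line based calibration with OpenCV
# (generic sketch, not the method of the cited work).
import numpy as np
import cv2

# Image coordinates of four ground-line intersections (hypothetical values).
image_points = np.array(
    [[412.0, 310.0], [890.0, 305.0], [955.0, 620.0], [350.0, 628.0]],
    dtype=np.float32,
)

# Corresponding positions on the planar field model, in metres
# (hypothetical penalty-area corners).
field_points = np.array(
    [[0.0, 0.0], [16.5, 0.0], [16.5, 40.3], [0.0, 40.3]],
    dtype=np.float32,
)

# Homography from the image plane to the field plane; with more line
# intersections, cv2.RANSAC could be passed to reject mismatches.
H, mask = cv2.findHomography(image_points, field_points)

# Map a detected player position from the image into field coordinates.
player_in_image = np.array([[[640.0, 450.0]]], dtype=np.float32)
player_on_field = cv2.perspectiveTransform(player_in_image, H)
print(player_on_field)  # approximate (x, y) position on the field in metres
```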
