Abstract

Athlete detection and action recognition in sports video is a very challenging task due to dynamic and cluttered backgrounds. Several attempts at automatic analysis of athletes in many sports videos have been made; however, taekwondo video analysis remains an unstudied field. In light of this, a novel framework for automatic technique analysis in broadcast taekwondo video is proposed in this paper. For an input video, in the first stage, athlete tracking and body segmentation are performed with a modified Structure Preserving Object Tracker. In the second stage, the de-noised frames from the video sequence that fully contain the body of the analyzed athlete are used to train a deep learning network, PCANet, which predicts the athlete's action in each individual frame. Because a technique is composed of many consecutive actions and each action corresponds to a video frame, analyzing techniques over video sequences is a natural approach. In the last stage, a linear SVM is trained on the predicted action frames to obtain a technique classifier. To evaluate the performance of the proposed framework, extensive experiments on a real broadcast taekwondo video dataset are reported. The results show that the proposed method achieves state-of-the-art performance for complex technique analysis in taekwondo video.
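To make the final stage concrete, the following is a minimal Python sketch of how predicted per-frame actions could feed a linear SVM technique classifier. It assumes the frame-level predictions (stand-ins for PCANet outputs) are already available as integer action labels, and it assumes a histogram encoding to aggregate each clip into a fixed-length vector; the abstract does not specify the actual aggregation, so that choice, along with the function names and toy data, is purely illustrative.

    import numpy as np
    from sklearn.svm import LinearSVC

    # Hypothetical input: for each clip, a sequence of per-frame action labels
    # produced by a frame-level model (e.g. PCANet in the paper's pipeline).
    def action_histogram(frame_actions, num_actions):
        """Aggregate a clip's per-frame action labels into a normalized histogram."""
        hist = np.bincount(frame_actions, minlength=num_actions).astype(float)
        return hist / max(hist.sum(), 1.0)

    def train_technique_classifier(clips, technique_labels, num_actions):
        """Fit a linear SVM on clip-level action histograms (illustrative only)."""
        X = np.stack([action_histogram(np.asarray(c), num_actions) for c in clips])
        clf = LinearSVC(C=1.0)
        clf.fit(X, technique_labels)
        return clf

    # Toy example: three clips, an action vocabulary of 5, two technique classes.
    clips = [[0, 0, 1, 2], [3, 3, 4], [0, 1, 1, 2, 2]]
    labels = [0, 1, 0]
    clf = train_technique_classifier(clips, labels, num_actions=5)
    print(clf.predict([action_histogram(np.array([0, 1, 2]), 5)]))

The histogram encoding discards the temporal order of actions; a sequence-aware encoding would likely be closer to the paper's intent, but the sketch above shows the basic action-frames-to-SVM structure described in the abstract.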
