Abstract

Automated action recognition is useful for improving athletes' performance through notational analysis, which coaches and notational analysts use to study movement patterns, strategy and tactics. Action recognition is therefore a key step before further analysis can be done. This paper focuses on developing automated badminton action recognition using a vision-based dataset of 1496 badminton match image frames covering five actions: smash, clear, drop, net shot and lift. First, the dataset was split 0.8:0.2 into training and testing sets for the machine learning classification task. Second, features of the training dataset were extracted using the AlexNet Convolutional Neural Network (CNN) model. For feature extraction, we introduce a new local feature extractor technique that extracts features at the fc8 layer. After collecting the features at the fc8 layer, they were classified using a machine learning classifier, a linear Support Vector Machine (SVM). The experiment was then repeated using the usual global feature extractor technique. Lastly, both the new local and the global feature extractor techniques were repeated with the GoogLeNet CNN model to compare the performance of the AlexNet and GoogLeNet models. The results show that the new local feature extractor with the AlexNet CNN model achieves the best accuracy, 82.0%.
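As a rough illustration of the pipeline described above (fc8-layer features from a pretrained AlexNet fed into a linear SVM), the following is a minimal sketch using PyTorch/torchvision and scikit-learn rather than the authors' original toolchain; folder paths, hyperparameters, and the train/test folder layout are assumptions for illustration only.

```python
# Sketch: extract AlexNet fc8 features and classify them with a linear SVM.
# Paths, batch size, and SVM settings are illustrative, not the paper's values.
import torch
import torchvision.models as models
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Pretrained AlexNet; its final 1000-unit linear layer corresponds to Caffe's "fc8".
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_fc8_features(folder):
    """Return (features, labels); features are the 1000-d fc8 activations per frame."""
    dataset = ImageFolder(folder, transform=preprocess)  # one subfolder per action class
    loader = DataLoader(dataset, batch_size=32, shuffle=False)
    feats, labels = [], []
    with torch.no_grad():
        for images, targets in loader:
            feats.append(alexnet(images))               # fc8 output (pre-softmax)
            labels.append(targets)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

# Assumes the 0.8:0.2 split is materialised as separate train/ and test/ folders.
X_train, y_train = extract_fc8_features("badminton_frames/train")
X_test, y_test = extract_fc8_features("badminton_frames/test")

svm = LinearSVC(C=1.0, max_iter=10000).fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, svm.predict(X_test)))
```

The same skeleton can be rerun with a GoogLeNet backbone (e.g. `models.googlenet(...)`) to reproduce the model comparison described in the abstract.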
