Abstract
Evaluating the performance of players in a dynamic competition is vital for effective sports coaching. However, quantitative evaluation of players in racket sports is difficult because performance arises from the integration of complex tactical and technical (i.e., whole-body movement) skills. In this study, we propose a new evaluation method for racket sports based on deep reinforcement learning, which analyzes a player's motion in detail rather than considering only the outcomes (i.e., scores). Our method uses historical data, including information on the tactical and technical performance of players, to learn the next-score probability as a Q-function, which is then used to value the players' actions. We leverage a long short-term memory (LSTM) model to learn the Q-function, taking as input the players' poses and the shuttlecock position, identified by the AlphaPose and TrackNet algorithms, respectively. We verified our approach by comparing it with various baselines and demonstrated its effectiveness through use cases analyzing the performance of top badminton players in world-class events.
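As a rough illustration of the architecture the abstract describes, the sketch below shows an LSTM that consumes a rally's per-frame features (player poses plus shuttlecock position) and outputs next-score probabilities as a Q-function. All names, feature dimensions (e.g., 17 keypoints per player), and the two-way softmax head are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Hypothetical LSTM Q-network: maps a rally's frame sequence
    (player poses + shuttlecock position) to next-score probabilities."""

    def __init__(self, pose_dim=2 * 17 * 2, shuttle_dim=2, hidden_dim=128):
        super().__init__()
        # Assumed per-frame feature: two players x 17 AlphaPose-style
        # keypoints x (x, y), plus a 2-D shuttlecock position (TrackNet-style).
        self.lstm = nn.LSTM(pose_dim + shuttle_dim, hidden_dim, batch_first=True)
        # Two outputs: P(player A scores next), P(player B scores next).
        self.head = nn.Linear(hidden_dim, 2)

    def forward(self, frames):
        # frames: (batch, time, pose_dim + shuttle_dim)
        _, (h_n, _) = self.lstm(frames)
        # Use the final hidden state to predict the next-score probability.
        return torch.softmax(self.head(h_n[-1]), dim=-1)

# Usage: value a batch of 4 rallies, each 120 frames long (random data here).
net = QNetwork()
frames = torch.randn(4, 120, 2 * 17 * 2 + 2)
q = net(frames)  # shape (4, 2): next-score probability per player
```

Differences in Q-values before and after a shot could then serve to value individual actions, in the spirit of the method summarized above.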