Abstract

We propose a reinforcement learning (RL)-based technique to detect passes from the video of a soccer match. Pass detection determines the ball possession statistics of a soccer match. A sequence of video frames is mapped to a sequence of states: ball possessed by team A, ball possessed by team B, or ball not possessed by either team. The RL agent learns this frame-to-state mapping along with the optimal policy for choosing the mapping at each frame. We propose a novel reward function that exploits contextual information of the soccer game to help the agent learn the optimal policy. In this context, the advantage of RL lies in integrating a reward system into the choice of the action that maps a video frame to one of the three possible states. Unlike competing methods, we design the RL model so that explicit identification of players' team labels is not required. We introduce a Deep Recurrent Q-Network (DRQN) to learn the optimal policy. For efficient training of the DRQN, we propose de-correlated experience replay (DER), a strategy that selects important experiences based on the correlations among the experiences stored in the replay memory. Experimental results show that our method achieves at least 5.75% and 2.1% higher accuracy in pass detection and in computing possession statistics, respectively, compared with similar approaches.
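
The abstract describes DER only briefly, so the following is a minimal, hypothetical sketch in Python of how selecting de-correlated experiences from a replay memory could work. It assumes each stored experience carries a fixed-length feature vector and uses the Pearson correlation of those features (computed as cosine similarity of mean-centered vectors) as the correlation measure; the class name DecorrelatedReplay, the greedy selection rule, and the candidate-pool size are illustrative assumptions, not details taken from the paper.

import numpy as np

# Hypothetical replay memory with de-correlated sampling: from a random
# candidate pool, greedily pick experiences whose feature vectors are
# least correlated with the ones already selected for the batch.
class DecorrelatedReplay:
    def __init__(self, capacity, feature_dim):
        self.capacity = capacity
        self.features = np.zeros((capacity, feature_dim), dtype=np.float32)
        self.transitions = [None] * capacity
        self.size = 0
        self.pos = 0

    def add(self, feature, transition):
        # Overwrite the oldest slot once the memory is full.
        self.features[self.pos] = feature
        self.transitions[self.pos] = transition
        self.pos = (self.pos + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def sample(self, batch_size, candidates=256, rng=np.random):
        idx = rng.choice(self.size, size=min(candidates, self.size),
                         replace=False)
        feats = self.features[idx]
        # Mean-center and L2-normalize rows so that dot products equal
        # Pearson correlations between candidate feature vectors.
        feats = feats - feats.mean(axis=1, keepdims=True)
        feats = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
        chosen = [0]  # seed the batch with an arbitrary candidate
        while len(chosen) < min(batch_size, len(idx)):
            # Each candidate's strongest correlation with the chosen set.
            corr = np.abs(feats @ feats[chosen].T).max(axis=1)
            corr[chosen] = np.inf  # never re-pick a selected candidate
            chosen.append(int(corr.argmin()))
        return [self.transitions[i] for i in idx[chosen]]

Under these assumptions, a DRQN trainer would draw its mini-batches through sample instead of uniform sampling, so that each gradient step sees a more diverse, less redundant set of transitions.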
