Abstract

In visual object tracking, improving both run time and accuracy in complex situations has always been an important issue. Many sophisticated tracking algorithms, such as part-based algorithms, achieve better accuracy under occlusion, but at much greater computational cost. To address these problems, this paper proposes a selective part-based correlation filter (SPCF) tracking algorithm with reinforcement learning to achieve more stable and efficient target tracking. First, based on the response map of the correlation filter (CF), the tracking process is divided into three states: simple environments, complex environments, and harsh environments. Second, reinforcement learning is used to determine the state of each frame, improving tracking performance across different situations. Third, the online selection of states is formulated as a Markov decision process (MDP), whose policy is learned by reinforcement learning. Different strategies are then used to track the target in each state: the overall filter is used to increase speed in simple environments; part-based filters are used to improve accuracy in complex environments; and in harsh environments, where the target disappears completely, a redetection algorithm is used to find the target when it reappears. Finally, the performance of the tracking algorithm is verified on the VOT2018, OTB-2015, and LaSOT datasets.
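The three-state idea can be sketched as follows. A common confidence measure for a CF response map is the peak-to-sidelobe ratio (PSR); the sketch below maps it to one of the three states with fixed thresholds. All function names and threshold values here are illustrative assumptions, not the paper's implementation: in the SPCF algorithm the selection policy is learned via reinforcement learning over the MDP rather than hand-set.

```python
import numpy as np

def psr(response):
    """Peak-to-sidelobe ratio of a CF response map (a standard
    confidence measure; not taken from the paper's code)."""
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    # Exclude an 11x11 window around the peak, treat the rest as sidelobe.
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - 5):py + 6, max(0, px - 5):px + 6] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

SIMPLE, COMPLEX, HARSH = "simple", "complex", "harsh"

def select_state(confidence, hi=8.0, lo=3.0):
    """Map response confidence to a tracking state.
    The hi/lo thresholds are placeholders; SPCF learns this
    policy with reinforcement learning instead."""
    if confidence >= hi:
        return SIMPLE   # overall filter: fast tracking
    if confidence >= lo:
        return COMPLEX  # part-based filters: robust to occlusion
    return HARSH        # redetection: target has disappeared
```

A sharp, isolated peak in the response map yields a high PSR and keeps the tracker in the fast overall-filter state; a multi-modal or flat response lowers the PSR and pushes the tracker toward the part-based or redetection strategies.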
