Abstract

Though many deep-learning-based trackers for visual object tracking have achieved state-of-the-art performance on multiple benchmarks, they still struggle with significant variations in object appearance and with sudden loss of the object. To capture variations in object appearance, this article proposes a template matching network for object tracking in which deep reinforcement learning is introduced to learn how to update the template. Specifically, the template updating problem is modeled as a Markov decision process, and the proximal policy optimization (PPO) algorithm is applied to learn the policy for updating the current template. The resulting updating policy not only accounts for variations of the object but also estimates the influence of the current update on subsequent frames. To further handle sudden loss of the object, a two-class redetection discriminator is proposed to determine whether the object has been lost; if so, a global redetection is launched to relocate the target. Experimentally, the proposed method is compared with representative methods on the OTB2015 dataset, and the results show that it achieves competitive performance in both accuracy and tracking speed.
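The Markov-decision-process formulation of template updating can be sketched as a toy example: the state pairs the current template with a candidate patch, the action space is {keep, update}, and the reward is the tracking score on following frames, so a learned policy accounts for the long-term effect of updating now. All names below are illustrative assumptions; in the paper the policy is a network trained with PPO, which is replaced here by a naive heuristic stub.

```python
# Toy sketch of the template-updating MDP described in the abstract.
# All class/function names are illustrative assumptions, not the authors' code.

KEEP, UPDATE = 0, 1  # discrete actions: keep the old template or adopt the candidate

class TemplateMDP:
    """State: (current template, candidate patch); action: KEEP or UPDATE."""

    def __init__(self, template):
        self.template = list(template)  # current appearance template (feature vector)

    def state(self, candidate):
        # Observation fed to the policy: template and candidate side by side
        return self.template + list(candidate)

    def step(self, action, candidate, score_next):
        if action == UPDATE:
            self.template = list(candidate)
        # Reward: tracking score on the *following* frame, so the learned
        # policy estimates the long-term influence of updating now
        return score_next

def policy_stub(state):
    # Placeholder for the PPO-learned policy pi(a | s): update when the
    # candidate differs strongly from the stored template (naive heuristic)
    n = len(state) // 2
    template, candidate = state[:n], state[n:]
    drift = sum(abs(t - c) for t, c in zip(template, candidate)) / n
    return UPDATE if drift > 0.5 else KEEP

mdp = TemplateMDP([0.0, 0.0, 0.0, 0.0])
candidate = [1.0, 1.0, 1.0, 1.0]
action = policy_stub(mdp.state(candidate))          # large drift, so UPDATE
reward = mdp.step(action, candidate, score_next=0.8)
```

In the actual method, PPO would train the policy by maximizing this kind of delayed reward, which is what lets the update decision weigh its consequences for later frames rather than only the current one.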
