Abstract

Particle filter algorithms have been an important branch of visual object tracking over the past decades, showing strong robustness in challenging scenarios with partial occlusion and large-scale variations. However, because a large number of particles must be drawn for accurate target state estimation, their tracking efficiency typically suffers, especially when combined with deep convolutional features, which the visual tracking community has adopted to handle significant variations in target appearance. In this paper, we propose to exploit deep convolutional features with only a few particles in a novel hierarchical particle filter, which formulates correlation filters as observation models and breaks the standard particle filter framework down into two constituent particle layers, namely a particle translation layer and a particle scale layer. The particle translation layer focuses on the object location using deep convolutional features, which capture semantics but cannot precisely estimate the object scale, while the particle scale layer handles large-scale variations using lightweight hand-crafted features that capture the spatial details of the object size. Moreover, an efficient ensemble method is proposed to exploit deeper convolutional features with richer semantics in the particle translation layer. Extensive experiments on four challenging tracking datasets, including OTB-2013, OTB-2015, VOT2014, and VOT2015, demonstrate that the proposed method performs favorably against a number of state-of-the-art trackers.
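The two-layer decomposition can be pictured with a minimal sketch, which is not the authors' implementation: the helpers `cf_response`, `translation_layer`, and `scale_layer`, the particle counts, and the Gaussian sampling widths below are all assumptions made for illustration. A correlation-filter-style score acts as the observation model; the translation layer samples a few position particles over a deep-feature map, and the scale layer samples scale particles over a hand-crafted feature map.

```python
import numpy as np

def cf_response(features, cf_weights):
    # Hypothetical correlation-filter observation model: correlate the feature
    # map with the filter in the frequency domain and return the peak response.
    F = np.fft.fft2(features)
    H = np.fft.fft2(cf_weights)
    response = np.real(np.fft.ifft2(np.conj(H) * F))
    return response.max()

def translation_layer(deep_feat, cf_deep, prev_pos, n_particles=50, sigma=4.0):
    # Particle translation layer (sketch): sample few position particles around
    # the previous location and score each with the deep-feature correlation filter.
    candidates = prev_pos + np.random.randn(n_particles, 2) * sigma
    scores = []
    for dx, dy in candidates:
        shifted = np.roll(deep_feat, (int(round(dy)), int(round(dx))), axis=(0, 1))
        scores.append(cf_response(shifted, cf_deep))
    return candidates[int(np.argmax(scores))]

def scale_layer(hog_feat, cf_hog, prev_scale, n_particles=20, sigma=0.05):
    # Particle scale layer (sketch): sample scale particles and score them with a
    # lightweight hand-crafted-feature correlation filter.
    candidates = prev_scale * np.exp(np.random.randn(n_particles) * sigma)
    scores = []
    h, w = hog_feat.shape
    for s in candidates:
        # Crude nearest-neighbor rescaling of the feature map, for illustration only.
        idx_y = np.clip((np.arange(h) / s).astype(int), 0, h - 1)
        idx_x = np.clip((np.arange(w) / s).astype(int), 0, w - 1)
        scores.append(cf_response(hog_feat[np.ix_(idx_y, idx_x)], cf_hog))
    return candidates[int(np.argmax(scores))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    deep_feat = rng.standard_normal((64, 64))  # stand-in for one deep feature channel
    hog_feat = rng.standard_normal((64, 64))   # stand-in for a hand-crafted feature map
    cf_deep = rng.standard_normal((64, 64))
    cf_hog = rng.standard_normal((64, 64))
    pos = translation_layer(deep_feat, cf_deep, prev_pos=np.array([32.0, 32.0]))
    scale = scale_layer(hog_feat, cf_hog, prev_scale=1.0)
    print("estimated position:", pos, "estimated scale:", scale)
```

Splitting the state space this way keeps the particle count low: position particles only pay the cost of the deep features, while the cheaper hand-crafted features absorb the scale search.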
