Abstract

By adaptively learning the difference between object and background, discriminative trackers are able to overcome the complex-background problem in visual object tracking. However, they are not robust enough to handle out-of-plane rotation of the object, which reduces recall performance. Meanwhile, by allowing individual parts a certain degree of freedom, part-based trackers can better handle the out-of-plane rotation problem. However, they are prone to being affected by complex background, leading to low precision performance. To address both issues simultaneously, we propose a collaborative strategy that enables mutual enhancement between a discriminative tracker and a part-based tracker to obtain better overall performance. On one hand, we use validated results from the part-based tracker to update the discriminative tracker, improving recall performance. On the other hand, based on confident results from the discriminative tracker, we adaptively update the part-based tracker, simultaneously improving precision performance. Experiments on various challenging sequences show that our approach achieves state-of-the-art performance, demonstrating the effectiveness of mutual collaboration between the two trackers.
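To make the mutual-update scheme concrete, the following is a minimal sketch of the kind of collaboration loop the abstract describes. The tracker classes, the IoU-based validation test, and the confidence thresholds are hypothetical placeholders chosen for illustration, not the paper's actual components.

```python
# Illustrative sketch of the collaborative tracking loop described above.
# DiscriminativeTracker, PartBasedTracker, and the thresholds below are
# hypothetical placeholders, not the paper's actual implementation.

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0


class DiscriminativeTracker:
    """Stand-in for a tracker that separates object from background."""
    def track(self, frame):
        return (10, 10, 50, 50), 0.9   # (box, confidence); real logic omitted
    def update(self, frame, box):
        pass                            # retrain object/background model on `box`


class PartBasedTracker:
    """Stand-in for a tracker that follows loosely coupled object parts."""
    def track(self, frame):
        return (12, 11, 50, 50), 0.7   # (box, confidence); real logic omitted
    def update(self, frame, box):
        pass                            # refresh the part models inside `box`


def collaborative_track(frames, disc, part, conf_thresh=0.8, iou_thresh=0.5):
    """Run both trackers and let each one's reliable output update the other."""
    results = []
    for frame in frames:
        disc_box, disc_conf = disc.track(frame)
        part_box, part_conf = part.track(frame)

        # Validated part-based result updates the discriminative tracker,
        # helping it follow out-of-plane rotation (recall improvement).
        if iou(part_box, disc_box) >= iou_thresh:
            disc.update(frame, part_box)

        # Confident discriminative result updates the part-based tracker,
        # suppressing drift onto complex background (precision improvement).
        if disc_conf >= conf_thresh:
            part.update(frame, disc_box)

        results.append(disc_box if disc_conf >= part_conf else part_box)
    return results
```

As a usage note, each frame's final output here is simply the more confident of the two trackers' boxes; a real system would likely use a more careful fusion and validation rule than this placeholder.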
