Abstract

Two motion models are proposed to enhance the performance of video object tracking (VOT) algorithms. The first is a random walk model that captures the randomness of motion patterns; the second is a data-adaptive vector auto-regressive (VAR) model that exploits more regular motion patterns. The performance of these models is evaluated empirically on real-world datasets. Three publicly available real-time visual object trackers, namely the normalized cross-correlation (NCC) tracker, the New Scale Adaptive with Multiple Features (NSAMF) tracker, and the correlation filter neural network (CFNet) tracker, are modified using each of the two models, and their tracking performance is compared against that of the original formulations. Both motion models lead to performance gains for all three trackers. This validates the hypothesis that, when training videos are available, prior information embodied in the motion models can improve tracking performance.
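To make the two motion priors concrete, the following is a minimal sketch (not the paper's actual implementation; function names, the choice of least-squares fitting, and all parameters are illustrative assumptions): a random walk predicts the next object-center position as the previous one plus Gaussian noise, while a VAR(p) model regresses the current position on the previous p positions and extrapolates one step ahead.

```python
import numpy as np

def random_walk_predict(prev_pos, sigma=2.0, rng=None):
    """Random-walk prior: next 2-D position = previous position + Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    return np.asarray(prev_pos, dtype=float) + rng.normal(0.0, sigma, size=2)

def var_predict(history, order=2):
    """VAR(p) prior: fit linear coefficients mapping the `order` most recent
    2-D positions to the next one (ordinary least squares), then predict."""
    hist = np.asarray(history, dtype=float)  # shape (T, 2): past center positions
    T = len(hist)
    # Design matrix: each row stacks the positions at lags 1..order.
    X = np.hstack([hist[order - k - 1 : T - k - 1] for k in range(order)])
    Y = hist[order:]  # targets: the position that followed each lag window
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)
    # One-step-ahead prediction from the most recent `order` positions.
    x_new = np.hstack([hist[T - k - 1] for k in range(order)])
    return x_new @ A
```

For a constant-velocity trajectory the VAR(2) fit recovers the linear trend exactly, whereas the random-walk prior merely perturbs the last observed position; a tracker could use either prediction to center its search window in the next frame.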
