Abstract

Visual tracking remains a challenging research problem because of appearance variations of the object over time, cluttered and changing backgrounds, and the requirement for real-time speed. In this paper, we investigate real-time accurate tracking through an instance-level tracking-by-verification mechanism. We propose a multi-stream deep similarity learning network that learns a similarity comparison model purely off-line. Our loss function encourages the distance between a positive patch and the background patches to be larger than the distance between the positive patch and the target template. The learned model is then directly used to determine, in each frame, the patch that is most distinct from the background context and most similar to the target template. Within the learned feature space, even if the distance between positive patches grows large due to background clutter, hard distractors from the same class, or appearance changes of the target, our method can still distinguish the target robustly using the relative distance. In addition, we propose a complete framework that incorporates failure recovery and template updating to further improve tracking performance without consuming excessive computing resources. Experiments on visual tracking benchmarks show the effectiveness of the proposed tracker in comparison with several recent real-time trackers as well as the trackers already included in the benchmarks.
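The relative-distance objective described above can be illustrated with a hinge-style margin loss. The sketch below is an assumption for illustration only (the function name, the use of a fixed margin, and the mean reduction are not taken from the paper): it penalizes any background patch whose distance to the positive patch does not exceed the positive-to-template distance by at least a margin.

```python
import numpy as np

def relative_distance_loss(d_pos_template, d_pos_background, margin=1.0):
    """Illustrative hinge-style relative-distance loss (not the paper's
    exact formulation). Encourages the distance between the positive
    patch and each background patch to exceed the distance between the
    positive patch and the target template by at least `margin`.

    d_pos_template:   scalar distance(positive patch, target template)
    d_pos_background: 1-D array of distances(positive patch, background patches)
    """
    # Hinge term is zero once a background patch is sufficiently far away.
    losses = np.maximum(0.0, margin + d_pos_template - d_pos_background)
    return float(losses.mean())

# Background patches far from the positive patch incur no loss:
print(relative_distance_loss(0.5, np.array([2.0, 3.0])))  # → 0.0
# A background patch too close to the positive patch is penalized:
print(relative_distance_loss(1.0, np.array([1.2])))       # → 0.8
```

This mirrors the verification idea in the abstract: classification relies on relative distances within the learned feature space rather than on absolute distance thresholds.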
