Abstract

Pruning the parameters of basis filters can effectively eliminate the negative effect of redundant deep features in discriminative correlation filter based trackers. However, when pruning basis filters, traditional methods often treat feature maps in Convolutional Neural Networks (CNN) as isolated observations and ignore the intrinsic correlation between partially attentional feature maps across multiple convolutional layers. In this letter, we propose a multi-layer factorized discriminative correlation filter (MLF-DCF) for visual tracking. By integrating multi-view discriminant learning and the discriminative correlation filter into a unified optimization problem, we can explore the correlation between different target sub-regions from a multi-layer viewpoint and thus effectively prune multi-layer basis filters. To improve both the speed and the accuracy of MLF-DCF, we adopt the alternating direction method of multipliers (ADMM) to solve the unified optimization problem and employ a mask estimation strategy to suppress background noise in the deep features. Extensive experiments on challenging video sequences demonstrate the superiority of our tracking method.
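For readers unfamiliar with ADMM-based correlation filter learning, the following is a minimal sketch of how a single-channel discriminative correlation filter can be learned with an ADMM variable split in the Fourier domain. It is not the authors' MLF-DCF formulation; the ridge-regression objective, the simple filter/auxiliary split, and all names (x, y, lam, mu) are illustrative assumptions only.

```python
# Hedged sketch: single-channel DCF learned with ADMM in the Fourier domain.
# This is an assumed, simplified setup, not the MLF-DCF model of the letter.
import numpy as np

def learn_dcf_admm(x, y, lam=1e-2, mu=1.0, iters=5):
    """Learn a filter h so that x correlated with h approximates y.

    x : 2-D feature channel (e.g. one CNN feature map of the target region)
    y : desired Gaussian-shaped response centred on the target
    """
    x_hat = np.fft.fft2(x)
    y_hat = np.fft.fft2(y)
    g_hat = np.zeros_like(x_hat)   # auxiliary (split) variable
    u_hat = np.zeros_like(x_hat)   # scaled dual variable
    for _ in range(iters):
        # h-subproblem: closed form per Fourier frequency
        h_hat = (np.conj(x_hat) * y_hat + mu * (g_hat - u_hat)) / \
                (np.conj(x_hat) * x_hat + mu)
        # g-subproblem: ridge shrinkage of h + u
        g_hat = mu * (h_hat + u_hat) / (lam + mu)
        # dual update
        u_hat = u_hat + h_hat - g_hat
    return h_hat

def track_response(h_hat, z):
    """Correlate the learned filter with a new search-region channel z."""
    z_hat = np.fft.fft2(z)
    return np.real(np.fft.ifft2(np.conj(h_hat) * z_hat))
```

In a multi-layer, multi-channel setting such as the one described above, the same alternating scheme would be applied jointly over the factorized filters of several convolutional layers, which is where the pruning of multi-layer basis filters comes in.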
