Abstract

Correlation filter (CF) trackers achieve excellent performance and show high robustness to motion blur and illumination change by learning discriminative filters. However, tracking under challenging conditions such as occlusion or out-of-view is still not well resolved. Under occlusion, background information is mixed into the image patch used to learn the filter, which causes the filter to model the background. To alleviate this problem, we improve CF trackers by proposing the subspace-reconstruction-based CF (SRBCF) tracker. In our method, when the object's appearance changes dramatically, e.g., under occlusion or disappearance, the original image patch used for filter learning is replaced by a reconstructed patch so that the filter learns from the object instead of the background. The subspace is constructed from image patches of the search window in previous frames. To follow changes in the subspace and mitigate the adverse effect of outliers during tracking, we improve a dynamic L1-PCA method to construct and update the subspace at a small extra computational cost. Our method can be embedded in various correlation filter trackers, such as STAPLE and KCF. Extensive experiments on the OTB-100, UAV123, DTB70, and Temple Color Pure datasets (49 sequences repeated in OTB-100 were removed) validate the effectiveness of our method. The maximum AUC gain over the baseline method reaches 11.2% on DTB70.
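A minimal sketch of the subspace-reconstruction idea described above, under stated assumptions: a basis of recent search-window patches is maintained and, when the reconstruction error of the current patch is large (suggesting occlusion or a drastic appearance change), the reconstructed patch is fed to the filter update instead of the raw patch. Function names and the error threshold are hypothetical, and a plain SVD stands in for the paper's dynamic L1-PCA update; this is not the authors' implementation.

```python
import numpy as np

def build_subspace(patches, rank=8):
    """Build an orthonormal basis from vectorized patches of previous frames.

    patches: (d, n) array, one column per vectorized search-window patch.
    Note: a plain SVD is used here as a stand-in for dynamic L1-PCA (assumption).
    """
    mean = patches.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(patches - mean, full_matrices=False)
    return U[:, :rank], mean

def patch_for_learning(patch, basis, mean, err_thresh=0.25):
    """Return the patch to use for filter learning.

    If the relative reconstruction error is large (appearance changed
    dramatically, e.g. occlusion), return the subspace reconstruction;
    otherwise return the raw patch. err_thresh is a hypothetical parameter.
    """
    x = patch.reshape(-1, 1)
    recon = basis @ (basis.T @ (x - mean)) + mean
    err = np.linalg.norm(x - recon) / (np.linalg.norm(x) + 1e-12)
    if err > err_thresh:
        return recon.reshape(patch.shape), True   # occlusion suspected
    return patch, False

# Usage with toy data: 20 stored 32x32 search-window patches, then one new frame.
prev_patches = np.random.rand(32 * 32, 20)
basis, mean = build_subspace(prev_patches, rank=5)
current_patch = np.random.rand(32, 32)
learn_patch, occluded = patch_for_learning(current_patch, basis, mean)
```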
