Abstract

The performance of visual object tracking depends largely on the target appearance model. Benefiting from the success of CNNs in feature extraction, recent studies have paid much attention to CNN representation learning and feature fusion models. However, existing feature fusion models ignore the relations between the features of different layers. In this paper, we propose a deep feature fusion model based on the Siamese network that exploits the connections between CNN feature maps. To handle the different spatial resolutions of CNN feature maps, we fuse them by introducing deconvolutional layers in the offline training stage; specifically, a top-down modulation is adopted for feature fusion. In the tracking stage, a simple matching operation between the fused features of the exemplar and the search region is performed with the learned model, which maintains real-time tracking speed. Experimental results show that the proposed method achieves favorable tracking accuracy compared with state-of-the-art trackers while running in real time.
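As a rough illustration of the two ideas mentioned in the abstract, the sketch below shows (a) a top-down fusion step that upsamples a deep, low-resolution feature map with a deconvolution (transposed convolution) and combines it with a shallower, higher-resolution map, and (b) the simple cross-correlation matching between exemplar and search-region features used at tracking time in Siamese trackers. This is a minimal PyTorch sketch under assumed layer names and channel sizes; it is not the authors' implementation.

```python
# Minimal sketch of top-down feature fusion and Siamese matching.
# Module names, channel sizes, and kernel choices are assumptions for
# illustration only; they are not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopDownFusion(nn.Module):
    """Fuse a deep (low-resolution) feature map into a shallower one.

    The deep map is upsampled with a transposed convolution so that it
    matches the spatial size of the shallow map, then the two maps are
    added and smoothed by a 3x3 convolution.
    """

    def __init__(self, deep_ch, shallow_ch, out_ch):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(deep_ch, out_ch, kernel_size=4,
                                         stride=2, padding=1)
        self.lateral = nn.Conv2d(shallow_ch, out_ch, kernel_size=1)
        self.smooth = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, deep, shallow):
        up = self.deconv(deep)
        # Resize to handle small spatial mismatches after upsampling.
        up = F.interpolate(up, size=shallow.shape[-2:], mode='bilinear',
                           align_corners=False)
        return self.smooth(up + self.lateral(shallow))


def match(exemplar_feat, search_feat):
    """Cross-correlate the fused exemplar feature with the fused
    search-region feature to obtain a response (score) map.

    exemplar_feat: (1, C, h, w), search_feat: (1, C, H, W); the exemplar
    feature is used as the convolution kernel, as in Siamese trackers.
    """
    return F.conv2d(search_feat, exemplar_feat)
```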
