Abstract

Visual object tracking for unmanned aerial vehicles (UAVs) is widely used in fields such as military reconnaissance, search and rescue, and film shooting. However, the performance of existing methods remains unsatisfactory due to complex factors including viewpoint changes, background clutter, and occlusion. Siamese trackers, which offer a convenient way of formulating visual tracking as a template matching process, have achieved success on recent visual tracking datasets. Unfortunately, these template-matching-based trackers cannot adapt well to the frequent appearance changes in UAV video datasets. To address this problem, this paper proposes a template-driven Siamese network (TDSiam), which consists of a feature extraction subnetwork, a feature fusion subnetwork, and a bounding box estimation subnetwork. In particular, a template library branch is proposed for the feature extraction subnetwork to adapt to the changing appearance of the target. In addition, a feature-aligned (FA) module is proposed as the core of the feature fusion subnetwork, fusing information in a center-aligned manner. More importantly, an occlusion detection method is proposed to reduce the noise caused by occlusion. Experiments were conducted on two challenging benchmarks, UAV123 and UAV20L; the results verify the more competitive performance of the proposed method compared with existing algorithms.
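Since the abstract gives no implementation details, the sketch below illustrates only the generic Siamese template-matching formulation it builds on, not TDSiam itself; the shared backbone layers, crop sizes, and depthwise cross-correlation head are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseMatcher(nn.Module):
    """Toy Siamese matcher: shared backbone + depthwise cross-correlation."""

    def __init__(self):
        super().__init__()
        # Weight-shared backbone applied to both template and search region --
        # the defining property of a Siamese tracker.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
        )

    def forward(self, template, search):
        z = self.backbone(template)  # template ("exemplar") features
        x = self.backbone(search)    # search-region features
        b, c, h, w = z.shape
        # Depthwise cross-correlation: slide the template features over the
        # search features; peaks in the response map mark candidate target
        # locations.
        response = F.conv2d(
            x.reshape(1, b * c, x.shape[-2], x.shape[-1]),
            z.reshape(b * c, 1, h, w),
            groups=b * c,
        )
        # Collapse channels into one response map per sample.
        response = response.reshape(b, c, response.shape[-2], response.shape[-1])
        return response.sum(dim=1, keepdim=True)

if __name__ == "__main__":
    template = torch.randn(1, 3, 127, 127)  # template crop
    search = torch.randn(1, 3, 255, 255)    # search crop
    print(SiameseMatcher()(template, search).shape)  # torch.Size([1, 1, 33, 33])
```

In a full pipeline such as the one the abstract describes, this matching step would be preceded by a template source (here, the proposed template library supplying an up-to-date exemplar) and followed by the bounding box estimation subnetwork.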
