Abstract
Visual tracking aims to estimate the state of a target object in a video sequence, which is challenging under drastic appearance changes. Most existing trackers handle appearance variations by dividing the target into parts. However, they commonly split target objects into regular patches with a hand-designed scheme, which is too coarse to align object parts well. Moreover, a fixed part detector struggles to partition targets of arbitrary categories and deformations. To address these issues, we propose a novel adaptive part mining tracker (APMT) for robust tracking, built on a transformer architecture comprising an object representation encoder, an adaptive part mining decoder, and an object state estimation decoder. The proposed APMT enjoys several merits. First, in the object representation encoder, the object representation is learned by distinguishing the target object from background regions. Second, in the adaptive part mining decoder, we introduce multiple part prototypes that adaptively capture target parts through cross-attention for arbitrary categories and deformations. Third, in the object state estimation decoder, we propose two novel strategies to effectively handle appearance variations and distractors. Extensive experimental results demonstrate that APMT achieves promising results at high FPS. Notably, our tracker ranked first in the VOT-STb2022 challenge.
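To make the part-prototype idea concrete, the sketch below illustrates one plausible reading of the adaptive part mining decoder: a set of learnable prototypes act as queries that cross-attend to the encoded object features, so each prototype can latch onto a different target part regardless of category or deformation. This is a minimal illustrative sketch, not the authors' implementation; the module name, tensor shapes, prototype count, and the use of `torch.nn.MultiheadAttention` are all assumptions.

```python
import torch
import torch.nn as nn


class AdaptivePartMining(nn.Module):
    """Hypothetical sketch of part prototypes cross-attending to object features."""

    def __init__(self, num_parts: int = 8, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # One learnable prototype per candidate part (assumed design).
        self.part_prototypes = nn.Parameter(torch.randn(num_parts, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, object_features: torch.Tensor) -> torch.Tensor:
        # object_features: (batch, num_tokens, dim) from the representation encoder.
        b = object_features.size(0)
        queries = self.part_prototypes.unsqueeze(0).expand(b, -1, -1)
        # Each prototype gathers the features of the part it attends to most.
        part_features, _ = self.cross_attn(queries, object_features, object_features)
        return part_features  # (batch, num_parts, dim)


# Illustrative usage with a dummy feature map standing in for encoder output.
feats = torch.randn(2, 400, 256)
parts = AdaptivePartMining()(feats)
print(parts.shape)  # torch.Size([2, 8, 256])
```

The key design point this sketch captures is that the parts are not fixed patches: because the prototypes are learned and attend over the whole feature map, the partition adapts to each target rather than following a hand-designed grid.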