Abstract
In the literature, numerous techniques have been proposed to enhance the performance of visual object tracking, and each method has its own merits and demerits. For instance, existing tracking methods may lose accuracy under external disturbances such as background clutter, occlusion, and scale variation. In this article, we propose a multi-expert tracking framework that exploits feature fusion and the contextual information of the target to improve tracking accuracy and robustness. Specifically, we constitute an expert group by ensembling features extracted from deep convolutional neural networks with different properties. Each expert in the group tracks the target in every frame, and the expert with the maximum robustness score is selected in each frame. Then, the contextual information of the target is introduced into the correlation filter to improve performance under complex interference. In addition, to further improve efficiency, more experts can be generated by fusing different types of features, which leads to greater robustness. Moreover, an adaptive model update strategy is introduced into the correlation filter to identify unreliable samples effectively. Finally, extensive experiments on the OTB2013, OTB2015, TempleColor128, and UAVDT datasets demonstrate that the proposed method performs favourably against state-of-the-art methods.
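To make the per-frame expert selection concrete, the sketch below illustrates the general idea under stated assumptions; it is not the authors' implementation. The Expert interface, the peak-to-sidelobe robustness score, and the update_threshold used to flag unreliable samples are hypothetical placeholders chosen only for illustration.

```python
# Minimal sketch (not the paper's code) of per-frame multi-expert selection.
# Each "expert" is assumed to be a correlation-filter tracker built on a
# different deep feature; the robustness score and update threshold are
# illustrative stand-ins for the quantities described in the abstract.
from dataclasses import dataclass
from typing import Callable, List, Tuple

import numpy as np

BBox = Tuple[int, int, int, int]  # (x, y, w, h)


@dataclass
class Expert:
    # Name of the feature the expert is built on, e.g. a specific CNN layer.
    name: str
    # Tracker callable: (frame, previous box) -> (predicted box, response map).
    track: Callable[[np.ndarray, BBox], Tuple[BBox, np.ndarray]]


def robustness_score(response: np.ndarray) -> float:
    """Hypothetical robustness score: peak-to-sidelobe ratio of the response map."""
    peak = response.max()
    sidelobe = response[response < peak]
    return float((peak - sidelobe.mean()) / (sidelobe.std() + 1e-8))


def track_frame(experts: List[Expert], frame: np.ndarray, prev_box: BBox,
                update_threshold: float = 5.0) -> Tuple[BBox, str, bool]:
    """Run every expert on the frame and keep the most robust prediction.

    Returns the selected box, the winning expert's name, and a flag telling
    the caller whether the sample looks reliable enough to update the filter
    model (a stand-in for the adaptive update strategy in the abstract).
    """
    results = []
    for expert in experts:
        box, response = expert.track(frame, prev_box)
        results.append((robustness_score(response), box, expert.name))
    score, box, name = max(results, key=lambda r: r[0])
    return box, name, score >= update_threshold
```

In this reading, selecting the maximum-score expert per frame lets a temporarily unreliable feature be overruled by the others, and skipping the model update when the score is low keeps corrupted samples out of the correlation filter.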