Abstract

Visual target tracking is an important topic in computer vision. Its main task is to automatically process and analyze input video so as to obtain the position and key motion information of the target in the video sequence, facilitating subsequent analysis of the target's trajectory and behavior. Researchers in this field have proposed a series of excellent algorithms and frameworks. However, given the problems of deformation, occlusion, illumination change, and random motion during tracking, ensuring the real-time performance, accuracy, and robustness of tracking algorithms remains a great challenge. In recent years, with the excellent performance of deep neural networks on many tasks, fusing deep features into correlation-filter-based algorithms for target tracking has become a hot research topic. This paper proposes an improved target tracking algorithm based on the classical correlation-filter tracking framework: (1) a deeper ResNet-101 deep convolutional neural network is integrated to extract target features more accurately; (2) the model training method is improved to reduce the cost of network training. Finally, the performance of the proposed tracker is evaluated on the OTB target tracking benchmark dataset and compared with other mainstream algorithms. The results show that the proposed improvements effectively increase tracking accuracy and robustness.
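The correlation-filter framework the abstract builds on learns a filter by ridge regression in the Fourier domain, then localizes the target at the peak of the filter's response map. The following is a minimal single-channel sketch of that idea (in the spirit of MOSSE-style trackers), not the paper's actual implementation; the label shape, Gaussian width, and regularization value are illustrative assumptions.

```python
import numpy as np

def gaussian_label(shape, center, sigma=2.0):
    # Desired response: a 2-D Gaussian peaked at the target centre,
    # the usual regression target in correlation-filter trackers.
    ys, xs = np.ogrid[:shape[0], :shape[1]]
    return np.exp(-((ys - center[0]) ** 2 + (xs - center[1]) ** 2)
                  / (2 * sigma ** 2))

def train_filter(features, label, lam=1e-2):
    # Closed-form ridge-regression solution in the Fourier domain:
    #   H = (G * conj(F)) / (F * conj(F) + lam)
    # lam regularizes against division by near-zero spectral energy.
    F = np.fft.fft2(features)
    G = np.fft.fft2(label)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def locate(H, features):
    # Apply the filter to a feature map; the peak of the real part of
    # the response map gives the estimated target position.
    response = np.real(np.fft.ifft2(H * np.fft.fft2(features)))
    return np.unravel_index(np.argmax(response), response.shape)

# Demo on one synthetic feature channel. A deep-feature tracker such as
# the one described here would instead feed multi-channel convolutional
# features (e.g. from a ResNet-101 backbone) into filters of this form.
rng = np.random.default_rng(0)
feats = rng.standard_normal((64, 64))
H = train_filter(feats, gaussian_label((64, 64), (32, 32)))
print(locate(H, feats))  # peak recovered near the training centre
```

Because training and detection are both element-wise products of FFTs, each update and each localization costs only O(N log N), which is what makes this family of trackers fast enough for real-time use.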
