Abstract
In order to obtain the color and texture features of a target under poor illumination and, at the same time, achieve accurate and fast object tracking, an improved object tracking method with visible and infrared image fusion based on a deep convolutional network is proposed. First, the fusion method for visible and infrared images is studied. Then, a target tracking framework based on a Siamese network is established to extract the convolutional features of the target template and of the current tracking target. During the convolution process, an erase unit is added to suppress the irrelevant information introduced by the zero-padding operation. Finally, the target's position in the current frame is computed by a depthwise cross-correlation module. In experiments, the proposed algorithm is evaluated on the standard VOT2020RGBT dataset and compared with several recently proposed algorithms, including JMMAC, mfDiMP, and CISRDCF. The results show that tracking accuracy increased from 0.612 to 0.692, from 0.668 to 0.672, and from 0.652 to 0.686 in the fast-motion, illumination-variation, and target-occlusion scenarios, respectively. These results indicate that the algorithm outperforms the compared methods in complex environments.
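The abstract describes a Siamese tracking pipeline in which a shared backbone extracts convolutional features from the target template and the search region, and a depthwise cross-correlation module produces the response map used to locate the target. The PyTorch sketch below illustrates that matching step only; the backbone, the `SiameseTracker` and `depthwise_xcorr` names, and all layer sizes are illustrative assumptions, and the paper's image-fusion front end and erase unit are not modeled here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def depthwise_xcorr(search_feat: torch.Tensor, template_feat: torch.Tensor) -> torch.Tensor:
    """Correlate each channel of the search features with the matching channel
    of the template features (depthwise cross-correlation as used in Siamese trackers)."""
    b, c, h, w = search_feat.shape
    # Fold the batch into the channel dimension so one grouped convolution
    # performs a separate correlation for every (sample, channel) pair.
    search = search_feat.reshape(1, b * c, h, w)
    kernel = template_feat.reshape(b * c, 1, template_feat.size(2), template_feat.size(3))
    response = F.conv2d(search, kernel, groups=b * c)
    return response.reshape(b, c, response.size(2), response.size(3))


class SiameseTracker(nn.Module):
    """Toy Siamese matching head: a shared backbone embeds both the template
    and the search region, and a 1x1 head fuses the per-channel response maps."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Placeholder backbone; the paper's network (and its erase unit for
        # padding-induced artifacts) is not specified in the abstract.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3),
        )
        self.head = nn.Conv2d(channels, 1, kernel_size=1)  # collapse channels into one score map

    def forward(self, template: torch.Tensor, search: torch.Tensor) -> torch.Tensor:
        z = self.backbone(template)   # 32x32 template crop -> 28x28 feature map
        x = self.backbone(search)     # 96x96 search crop  -> 92x92 feature map
        return self.head(depthwise_xcorr(x, z))


if __name__ == "__main__":
    tracker = SiameseTracker()
    template = torch.randn(2, 3, 32, 32)   # stand-in for fused visible/infrared template crops
    search = torch.randn(2, 3, 96, 96)     # stand-in for fused search-region crops
    score_map = tracker(template, search)
    print(score_map.shape)                 # (2, 1, 65, 65); the peak indicates the target position
```

The grouped-convolution trick computes all per-channel correlations in a single call, which is the usual way depthwise cross-correlation is implemented in practice.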