Abstract

This study proposes a novel robust video tracking algorithm consisting of target detection, multi-feature fusion, and extended Camshift. Firstly, a novel target detection method that integrates the Canny edge operator, three-frame difference, and improved Gaussian mixture model (IGMM)-based background modelling is provided to detect targets. The IGMM-based background modelling divides video frames into meshes to avoid pixel-wise processing. In addition, the output of the target detection is utilised to initialise the IGMM and to accelerate the convergence of its iterations. Secondly, low-dimensional regional covariance matrices are introduced to describe video targets by fusing multiple features such as pixel location, colour index, rotation- and scale-invariant features, uniform local binary patterns, and directional derivatives. Thirdly, an extended Camshift based on adaptive kernel bandwidth and robust H∞ state estimation is proposed to predict the states of fast-moving targets and to reduce the number of mean shift iterations. Finally, the effectiveness of the proposed tracking algorithm is demonstrated via experiments.
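For orientation, the regional covariance descriptor mentioned above follows the standard formulation $C_R = \frac{1}{n-1}\sum_{k=1}^{n}(z_k-\mu)(z_k-\mu)^\top$, where each $z_k$ stacks per-pixel features of the region. The sketch below is a minimal NumPy illustration of that idea, not the paper's exact implementation: it assumes a grey-level patch and a reduced feature set (pixel location, intensity, and first-order derivatives); the colour index, uniform LBP codes, and other maps named in the abstract could be appended as additional channels of the same size.

```python
import numpy as np

def region_covariance(region, extra_features=()):
    """Region covariance descriptor: d x d covariance of per-pixel feature vectors.

    `region` is a 2-D grey-level patch; `extra_features` may hold further
    per-pixel maps (same height/width), e.g. a uniform-LBP code map.
    """
    h, w = region.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]                  # pixel locations
    gray = region.astype(np.float64)
    dx = np.gradient(gray, axis=1)               # first-order directional derivatives
    dy = np.gradient(gray, axis=0)
    feats = [xs, ys, gray, np.abs(dx), np.abs(dy)]
    feats.extend(extra_features)
    Z = np.stack([f.ravel() for f in feats], axis=0)   # d x n feature matrix
    return np.cov(Z)                             # unbiased covariance, shape (d, d)

# Example: a 40x40 patch is summarised by a 5x5 covariance matrix.
patch = np.random.rand(40, 40)
C = region_covariance(patch)
print(C.shape)  # (5, 5)
```

Because the descriptor size depends only on the number of fused features, not on the region size, it stays low-dimensional regardless of how large the tracked target is.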
