Abstract

Deep auto-encoder (DAE) models have been successfully used in object tracking due to their strong feature-representation capability. However, a single deep auto-encoder model is not robust enough to represent the appearance of an outdoor vehicle under its harsh working conditions, such as illumination variation, occlusion, and cluttered background. In this paper, a novel multiple-DAE tracking approach based on adaptive classifier fusion is proposed for robust outdoor vehicle visual tracking under the particle filter framework. First, two deep auto-encoders are trained offline on the gray-scale images and the gradient images of the raw training data, respectively, to obtain stronger feature representations of both modalities. Second, two classifiers are constructed from the encoders of the two well-trained deep auto-encoders, and the output of each classifier is used to compute the confidence of the corresponding particles. Finally, the confidence outputs of the two classifiers are fused and applied in online tracking, where the fusion weight of each classifier is computed according to the distribution of the particles as represented by that classifier. Extensive experiments conducted on the visual tracking benchmark (VTB) show that the proposed algorithm outperforms 9 popular tracking algorithms in challenging outdoor vehicle tracking scenes involving illumination variation, occlusion, cluttered background, and scale variation.
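
The following Python sketch illustrates the fusion idea described above: each pre-trained DAE encoder feeds a confidence classifier, and the particle confidences of the two branches are combined with adaptive weights. It is a minimal illustration and not the authors' implementation; the network sizes and the inverse-entropy weighting rule are assumptions standing in for the paper's exact fusion formula.

```python
# Minimal sketch (not the authors' code): two pre-trained encoders feed two
# confidence classifiers; the particle confidences are fused with adaptive weights.
# The weighting rule below (inverse entropy of the normalized confidence
# distribution) is an illustrative assumption, not the paper's exact formula.
import torch
import torch.nn as nn

class EncoderClassifier(nn.Module):
    """Encoder taken from a trained DAE followed by a binary confidence head."""
    def __init__(self, in_dim=1024, hid_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.head = nn.Sequential(nn.Linear(hid_dim, 1), nn.Sigmoid())

    def forward(self, x):                          # x: (n_particles, in_dim)
        return self.head(self.encoder(x)).squeeze(-1)   # (n_particles,)

def adaptive_weight(conf, eps=1e-8):
    """Give a classifier more weight when its confidence distribution is more peaked."""
    p = conf / (conf.sum() + eps)
    entropy = -(p * (p + eps).log()).sum()
    return 1.0 / (entropy + eps)

def fuse_confidences(conf_gray, conf_grad):
    w_gray = adaptive_weight(conf_gray)
    w_grad = adaptive_weight(conf_grad)
    return (w_gray * conf_gray + w_grad * conf_grad) / (w_gray + w_grad)

if __name__ == "__main__":
    torch.manual_seed(0)
    gray_net, grad_net = EncoderClassifier(), EncoderClassifier()
    gray_patches = torch.rand(200, 1024)   # particle patches from the gray-scale image
    grad_patches = torch.rand(200, 1024)   # the same particles on the gradient image
    with torch.no_grad():
        fused = fuse_confidences(gray_net(gray_patches), grad_net(grad_patches))
    best = fused.argmax().item()           # particle chosen as the new object state
    print(best, fused[best].item())
```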

Highlights

  • Video object tracking is an important research issue in computer vision

  • The fusion weight of each deep learning model is calculated automatically from the distribution of the particles represented by that model (a particle-filter loop that consumes the fused confidences is sketched after this list)

  • Several comparative tracking experiments are conducted on the visual tracking benchmark (VTB) platform to quantitatively and qualitatively evaluate the tracking performance
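
Because the tracker operates under the particle filter framework, the sketch below shows a toy tracking loop in which particles are propagated, scored by fused confidences (simulated randomly here), and resampled. The Gaussian random-walk state model over (x, y, scale) and all dynamics parameters are placeholder assumptions, not values from the paper.

```python
# A minimal particle-filter loop sketch, assuming a Gaussian random-walk state
# transition over (x, y, scale) and the fused classifier confidence as the
# particle weight; all dynamics parameters are placeholder assumptions.
import numpy as np

def propagate(states, std=(4.0, 4.0, 0.01)):
    """Diffuse particles around the previous state estimate."""
    return states + np.random.randn(*states.shape) * np.asarray(std)

def resample(states, weights):
    """Resample particles proportionally to their fused confidences."""
    p = weights / weights.sum()
    idx = np.random.choice(len(states), size=len(states), p=p)
    return states[idx]

if __name__ == "__main__":
    np.random.seed(0)
    n = 200
    states = np.tile([120.0, 80.0, 1.0], (n, 1))   # (x, y, scale) per particle
    for _ in range(5):                             # a few toy frames
        states = propagate(states)
        conf = np.random.rand(n)                   # stand-in for fused confidences
        estimate = states[conf.argmax()]           # particle with the highest confidence
        states = resample(states, conf)
    print(estimate)
```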


Summary

INTRODUCTION

Video object tracking is an important research issue in computer vision and is widely used in intelligent transportation systems (ITS) to obtain the state information of outdoor vehicles. A new multi-deep-learning-model fusion method is proposed, which fuses the results of a classifier trained on gray images with those of a classifier trained on gradient images, so that the models complement each other and the outdoor vehicle tracking problem can be addressed under challenging environmental conditions such as illumination variation, occlusion, rotation, and fast motion. To further improve tracking accuracy, in our previous work [17] we proposed a robust outdoor vehicle tracking method based on a k-sparse stacked denoising auto-encoder, motivated by the observation that the response of each neuron in a neural network to visual information is sparse. In that method, a k-sparse restriction is introduced into the classification neural network to learn invariant features of the input image and thereby enhance the network's ability to represent the object appearance model.
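
As an illustration of the k-sparse idea mentioned above, the sketch below applies a top-k sparsity constraint to the hidden layer of a denoising auto-encoder. The layer sizes, the value of k, and the corruption noise level are placeholder assumptions and do not reproduce the exact network of reference [17].

```python
# Illustrative sketch of a k-sparse constraint on a denoising auto-encoder's
# hidden layer (in the spirit of reference [17]); layer sizes, k, and the
# noise level are placeholder assumptions.
import torch
import torch.nn as nn

class KSparseDAE(nn.Module):
    def __init__(self, in_dim=1024, hid_dim=256, k=40, noise_std=0.2):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)
        self.dec = nn.Linear(hid_dim, in_dim)
        self.k, self.noise_std = k, noise_std

    def k_sparse(self, h):
        # keep only the k largest activations per sample, zero out the rest
        kth = torch.topk(h, self.k, dim=1).values[:, -1:]   # k-th largest value
        return h * (h >= kth).float()

    def forward(self, x):
        x_noisy = x + self.noise_std * torch.randn_like(x)  # denoising corruption
        h = torch.sigmoid(self.enc(x_noisy))
        return self.dec(self.k_sparse(h))

if __name__ == "__main__":
    model = KSparseDAE()
    x = torch.rand(32, 1024)                       # a batch of vectorized image patches
    loss = nn.functional.mse_loss(model(x), x)     # reconstruct the clean input
    loss.backward()
    print(loss.item())
```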

GENERIC FEATURE REPRESENTATION
EXPERIMENTAL TESTING
Findings
CONCLUSION