Video object tracking is a long-standing problem in computer vision. As video sources diversify, footage captured from unusual viewpoints and under increasingly challenging conditions continues to emerge, which raises the difficulty of the tracking task and places higher demands on a model's generalization ability. In this paper, we propose a novel quantum evolutionary learning tracker for video that combines quantum evolution with a deep network: a quantum evolutionary predictor generates a reliable population of candidate regions, and the deep network classifies them. In particular, the quantum evolutionary predictor estimates the object's motion state through a quantum rotation operator and trajectory inference, supplying motion information to the tracker. Because the predictor incorporates the object's historical context, it can still provide a stable population of candidate estimates when appearance features fail. The quantum evolutionary component and the deep network are combined into an end-to-end online video object tracker. In addition, we propose a new evaluation metric for video object tracking, Balanced Intersection over Union, which uses the aspect ratio to balance the contributions of overlap and distance. Finally, we test the model on the OTB 2015 dataset for natural video and on the SV248A10-SOT dataset for satellite video, and we further analyze and validate the proposed model by comparing it against more than twenty classical trackers. The experimental results show that our model achieves strong generalization ability and robustness.
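The abstract does not give the formula for Balanced Intersection over Union; the sketch below is only an illustrative guess at how an aspect-ratio-weighted mix of overlap and center distance might be computed, not the paper's actual metric. The box format (x, y, w, h), the enclosing-box normalization, and the weighting rule `alpha = 1 / aspect_ratio` are all assumptions introduced here for illustration.

```python
import numpy as np


def iou(box_a, box_b):
    """Standard Intersection over Union for two boxes in (x, y, w, h) format."""
    xa1, ya1, wa, ha = box_a
    xb1, yb1, wb, hb = box_b
    inter_w = max(0.0, min(xa1 + wa, xb1 + wb) - max(xa1, xb1))
    inter_h = max(0.0, min(ya1 + ha, yb1 + hb) - max(ya1, yb1))
    inter = inter_w * inter_h
    union = wa * ha + wb * hb - inter
    return inter / union if union > 0 else 0.0


def balanced_iou(pred, gt):
    """Hypothetical Balanced IoU: blends overlap with a normalized
    center-distance term, weighted by the ground-truth aspect ratio.
    This is NOT the paper's formula, only a sketch of the idea."""
    overlap = iou(pred, gt)

    # Center distance, normalized by the diagonal of the smallest enclosing box.
    cx_p, cy_p = pred[0] + pred[2] / 2, pred[1] + pred[3] / 2
    cx_g, cy_g = gt[0] + gt[2] / 2, gt[1] + gt[3] / 2
    enc_w = max(pred[0] + pred[2], gt[0] + gt[2]) - min(pred[0], gt[0])
    enc_h = max(pred[1] + pred[3], gt[1] + gt[3]) - min(pred[1], gt[1])
    diag = np.hypot(enc_w, enc_h)
    dist = np.hypot(cx_p - cx_g, cy_p - cy_g) / diag if diag > 0 else 0.0

    # Aspect-ratio-dependent weight (assumed): elongated objects lean more on
    # the distance term, near-square objects mostly on overlap.
    ar = max(gt[2], gt[3]) / max(min(gt[2], gt[3]), 1e-6)
    alpha = 1.0 / ar  # alpha in (0, 1]
    return alpha * overlap + (1.0 - alpha) * (1.0 - dist)
```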