Abstract

Although recently developed trackers show excellent performance even when tracking fast-moving and shape-changing objects with variable scale and orientation, trackers for electro-optical targeting systems (EOTS) still suffer from abrupt scene changes caused by frequent and fast camera motions under pan-tilt motor control, or by dynamic distortions in field environments. Conventional context-aware (CA) and deep-learning-based trackers have been studied to tackle these problems, but they do not fully overcome them and carry a substantial computational burden. In this paper, a global motion-aware method is proposed to address the fast camera motion issue. The proposed method consists of two modules: (i) a motion detection module, based on the change in image entropy, and (ii) a background tracking module, which tracks a set of features across consecutive images to find correspondences between them and estimate the global camera movement. A series of experiments is conducted on thermal infrared images, and the results show that the proposed method significantly improves the robustness of all tested trackers with minimal computational overhead. We show that the proposed method can be easily integrated into any visual tracking framework and applied to improve the performance of EOTS applications.

Highlights

  • The main task of short-term visual tracking is to localize the target in consecutive frames of a video

  • Visual tracking is one of the core problems in computer vision

  • We compare the performance of the proposed method with conventional trackers when applied to an electro-optical targeting system (EOTS) product mounted on an aircraft, where actual fast and complex camera motion occurred


Introduction

The main task of short-term visual tracking is to localize the target in consecutive frames of a video. The target position in the image can change significantly, causing the target to leave the fixed-size search range and making it impossible for the tracker to localize it. To address this issue, we propose a method to estimate the global camera movement and use it to shift the position of the tracker's search range. The background tracking (BT) framework estimates the translation of the global camera motion from two consecutive frames f_{t−1} and f_t whenever the GES trigger (Section 3.1) detects camera motion. As a conventional alternative, cross-correlation can be used to match features between two consecutive image frames and track the background.
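To make these two modules concrete, the following Python sketch (assuming OpenCV and NumPy) shows one plausible realization: an entropy-change trigger and a sparse feature-tracking step whose median displacement serves as the global translation. The function names (`image_entropy`, `entropy_trigger`, `estimate_global_translation`, `update_search_center`), the entropy threshold, and the use of pyramidal Lucas-Kanade optical flow for the correspondence step are illustrative assumptions, not details taken from the paper.

```python
import cv2
import numpy as np

def image_entropy(gray):
    """Shannon entropy of an 8-bit grayscale frame."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_trigger(prev_gray, curr_gray, thresh=0.15):
    """Flag abrupt camera motion when the frame-to-frame entropy
    change exceeds a threshold (hypothetical value)."""
    return abs(image_entropy(curr_gray) - image_entropy(prev_gray)) > thresh

def estimate_global_translation(prev_gray, curr_gray):
    """Track a sparse feature set between consecutive frames and return
    the median displacement as the global camera translation."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return np.zeros(2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return np.zeros(2)
    flow = (nxt[good] - pts[good]).reshape(-1, 2)
    return np.median(flow, axis=0)

def update_search_center(center, prev_gray, curr_gray):
    """Per-frame usage: shift the tracker's search window by the
    estimated global motion only when the trigger fires."""
    if entropy_trigger(prev_gray, curr_gray):
        dx, dy = estimate_global_translation(prev_gray, curr_gray)
        center = (center[0] + dx, center[1] + dy)
    return center
```

Taking the median over many feature displacements keeps the estimate robust to the moving foreground target; `cv2.phaseCorrelate` on float32 frames would be one way to realize the cross-correlation alternative mentioned above.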
