Abstract

Visual object tracking for aerial robot autonomy can become challenging, especially under fast target or camera motion and long-term occlusions. This paper presents a visual-inertial tracking paradigm that incorporates the camera's kinematic states into visual object tracking pipelines. We gathered a dataset of image sequences augmented with measurements of the camera's position and orientation as well as the object's position. For cases of long-term object occlusion, we provide ground-truth boxes derived by mapping the measured object position onto the image frame. A search-zone proposal method is developed that estimates the object's future position in the inertial frame and projects it back onto the image frame using the camera states. This search zone, which is robust to fast camera/target motion, is fused with the base tracker's original search-zone settings. Also proposed is a measure of a tracker's confidence that it is still following the correct target, enabling timely reporting of tracking failures. Accordingly, the model-updating mechanism of the base tracker is modulated to avoid recovering wrong objects as the target. The proposed modifications are benchmarked on nine visual object tracking algorithms, including five state-of-the-art deep learning structures, namely DiMP, PrDiMP, KYS, ToMP, and MixFormer. Most of the trackers are markedly improved by the modifications, with up to an 8% increase in precision. The modified PrDiMP tracker yields the best precision of 68.4%, exceeding all considered original (and modified) trackers. Source code and dataset are made available online.
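The geometric core of the search-zone proposal is a standard pinhole projection of the predicted object position, expressed in the inertial (world) frame, back into the image using the measured camera pose. Below is a minimal sketch of that step only, not the authors' implementation; the intrinsics K, the camera-to-world rotation R_wc, the camera position t_wc, and all numeric values are illustrative assumptions.

```python
import numpy as np

def project_to_image(p_world, R_wc, t_wc, K):
    """Project a 3D point in the inertial frame to pixel coordinates.

    p_world : (3,)   predicted object position in the world frame
    R_wc    : (3, 3) camera-to-world rotation (camera orientation)
    t_wc    : (3,)   camera position in the world frame
    K       : (3, 3) camera intrinsic matrix
    """
    # Transform into the camera frame: p_cam = R_wc^T (p_world - t_wc)
    p_cam = R_wc.T @ (p_world - t_wc)
    if p_cam[2] <= 0:
        return None  # point lies behind the camera; no valid projection
    # Perspective division followed by the intrinsics
    uv = K @ (p_cam / p_cam[2])
    return uv[:2]

# Illustrative usage: center a search zone on the projected prediction.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R_wc = np.eye(3)                     # camera axes aligned with world axes
t_wc = np.zeros(3)                   # camera at the world origin
p_pred = np.array([1.0, 0.5, 5.0])   # predicted object position (world frame)
print("search-zone center (px):", project_to_image(p_pred, R_wc, t_wc, K))
```

Because the prediction lives in the inertial frame, the projected search-zone center moves with the camera's measured pose, which is what makes the proposal robust to fast camera/target motion.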
