Abstract

Feature point tracking deals with image streams that change over time. Most existing feature point tracking algorithms consider only two adjacent frames at a time and discard the feature information from earlier frames. In this paper, we present a new eigenspace-based tracking method that learns an eigenspace representation of training features online and locates the target feature point with a Gauss-Newton-style search. A coarse-to-fine processing strategy is introduced to handle large affine transformations. Simulations and experiments on real images demonstrate the effectiveness of the proposed feature tracking algorithm under large pose changes and temporary occlusions.
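The two ingredients named in the abstract can be illustrated with a minimal sketch: an eigenspace built from past patch appearances (via SVD), and a Gauss-Newton search that moves the patch window to minimize the reconstruction error outside that eigenspace. This is a hypothetical illustration of the general technique, not the authors' implementation; all function names, the finite-difference Jacobian, and the integer step rounding are simplifying assumptions.

```python
import numpy as np

def eigenspace(patches, k):
    """Build a k-dimensional eigenspace from flattened training patches.

    Illustrative batch version; the paper's method updates this online.
    """
    X = np.asarray(patches, dtype=float)
    mean = X.mean(axis=0)
    # Columns of U are the principal appearance directions of the patches.
    U, _, _ = np.linalg.svd((X - mean).T, full_matrices=False)
    return mean, U[:, :k]

def residual(frame, x, y, size, mean, U):
    """Part of the patch at (x, y) that the eigenspace cannot reconstruct."""
    p = frame[y:y + size, x:x + size].ravel().astype(float) - mean
    return p - U @ (U.T @ p)

def gauss_newton_track(frame, x0, y0, size, mean, U, iters=10):
    """Integer-step Gauss-Newton search for the best patch position."""
    h, w = frame.shape
    x, y = x0, y0
    for _ in range(iters):
        r = residual(frame, x, y, size, mean, U)
        # Jacobian of the residual w.r.t. (x, y) by forward differences.
        Jx = residual(frame, x + 1, y, size, mean, U) - r
        Jy = residual(frame, x, y + 1, size, mean, U) - r
        J = np.stack([Jx, Jy], axis=1)
        # Solve the linearized least-squares problem J @ step = -r.
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        dx, dy = np.round(step).astype(int)
        if dx == 0 and dy == 0:
            break
        # Clamp so the patch window stays inside the frame.
        x = int(np.clip(x + dx, 0, w - size - 1))
        y = int(np.clip(y + dy, 0, h - size - 1))
    return x, y
```

In this sketch the search handles only translation; the coarse-to-fine strategy mentioned in the abstract would repeat such a search over an image pyramid so that large displacements are recovered at coarse resolution first.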
