Abstract

This work investigates the problem of robust vision-based human tracking by a human-following robot using point-based features such as SURF. The problem is challenging owing to failures arising from variation in illumination, change in pose, size or scale, camera motion, and partial or full occlusion. While point-based features provide detection that is robust against photometric and geometric distortions, tracking these features over subsequent frames is difficult because the number of matching points between a pair of images drops quickly with even slight variation in the target's appearance due to the above-mentioned factors. The problem of robust human tracking is solved by proposing a multi-tracker fusion framework that combines multiple trackers to ensure long-term tracking of the target. The framework also maintains a dynamic template pool of target features that is updated over time: the interaction between the first two trackers updates the template pool of target attributes, while the third tracker estimates the location of the target during full occlusion. The working of the framework is demonstrated by combining a SURF-based mean-shift tracker, an optical-flow tracker, and a Kalman filter to provide robust tracking over long durations. The efficacy of the resulting tracker is demonstrated through rigorous testing on a variety of video datasets.
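The paper itself does not include code; the Python/OpenCV sketch below is only a minimal illustration of the fusion loop the abstract describes, under several simplifying assumptions: the SURF-based mean-shift stage is reduced to plain descriptor matching with a ratio test, the template pool stores whole keypoint/descriptor snapshots, and names such as track_frame, MIN_MATCHES, and the 0.7 ratio are illustrative rather than taken from the paper.

```python
import cv2
import numpy as np

# SURF lives in opencv-contrib's "nonfree" build; on stock builds,
# cv2.ORB_create() (with cv2.NORM_HAMMING below) is a drop-in stand-in.
detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
matcher = cv2.BFMatcher(cv2.NORM_L2)

# Constant-velocity Kalman filter over the target centroid:
# state = (x, y, vx, vy), measurement = (x, y).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)

template_pool = []    # dynamic pool of (keypoints, descriptors) snapshots
MIN_MATCHES = 10      # hypothetical threshold for "target visible"

def track_frame(prev_gray, gray, prev_pts):
    """One fusion step: SURF matching cross-checked by optical flow,
    with the Kalman filter bridging full occlusions."""
    # Tracker 1: match SURF descriptors against the newest template.
    kp, des = detector.detectAndCompute(gray, None)
    good = []
    if des is not None and template_pool:
        _, pool_des = template_pool[-1]
        pairs = matcher.knnMatch(pool_des, des, k=2)
        good = [m for m, n in (p for p in pairs if len(p) == 2)
                if m.distance < 0.7 * n.distance]

    # Tracker 2: propagate the previous points with pyramidal Lucas-Kanade.
    flow_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                   prev_pts, None)
    flow_pts = flow_pts[status.ravel() == 1]

    kf.predict()
    if len(good) >= MIN_MATCHES:
        # Trackers agree the target is visible: refresh the template pool
        # and correct the Kalman filter with the measured centroid.
        centroid = np.float32([kp[m.trainIdx].pt for m in good]).mean(axis=0)
        template_pool.append((kp, des))
        kf.correct(centroid.reshape(2, 1))
        return centroid, flow_pts
    # Too few matches: treat as occlusion and trust the prediction alone.
    return kf.statePre[:2].ravel(), flow_pts
```

A caller would convert each frame to grayscale, seed prev_pts from features detected on the selected target in the first frame (e.g. np.float32([k.pt for k in kp0]).reshape(-1, 1, 2)), and invoke track_frame once per frame, re-seeding prev_pts from the matched SURF keypoints whenever the template pool is refreshed.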
