Abstract

Tracking arbitrary objects in natural environments is a challenging task in visual computing. A central problem is the need to adapt to changing appearances under strong transformations and occlusion. We propose a tracking framework that utilises the strength of Convolutional Neural Networks to create a robust and adaptive model of the object from training data produced during tracking. An incremental update mechanism improves accuracy and reduces the computational cost of training during tracking, enabling robust real-time tracking with state-of-the-art performance. Together with optimisations for deploying the framework on humanoid robots and distributed devices, this demonstrates its viability for research in developmental robotics on questions around infant cognition and active exploration.
