Abstract

In this paper, we present a robust 3D human-head tracking method. 3D head positions are essential for robots that interact with people: natural interaction behaviors, such as making eye contact, require knowledge of head position. Previous work using laser range finders (LRFs) has tracked 2D human positions with high accuracy in real time; however, LRF trackers cannot track multiple 3D head positions. Trackers based on multi-viewpoint images, on the other hand, can obtain 3D head positions, but vision-based trackers generally lack robustness and scalability, especially in open environments where lighting conditions vary over time. To achieve robust real-time 3D tracking, we propose a new method that combines an LRF tracker with a multi-camera tracker, using the LRF results as maintenance information for the multi-camera tracker. Through an experiment in a real environment, we show that our method outperforms existing methods in both robustness and scalability.
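To make the fusion idea concrete, the following is a minimal illustrative sketch of one plausible way to use reliable 2D LRF tracks as maintenance information for a 3D multi-camera tracker, as the abstract describes. The function name, the nearest-neighbour association, and the drift threshold are all assumptions for illustration, not the paper's actual algorithm.

```python
import math

def maintain_tracks(camera_tracks, lrf_tracks, max_drift=0.5):
    """Illustrative sketch (not the paper's algorithm): snap each 3D
    camera track's ground-plane (x, y) back to the nearest LRF track
    when it has drifted beyond max_drift metres; the head height z is
    kept from the camera tracker, since the LRF only provides 2D."""
    corrected = []
    for cx, cy, cz in camera_tracks:
        # Associate the camera track with the closest LRF-tracked
        # 2D position on the ground plane (nearest neighbour).
        lx, ly = min(lrf_tracks,
                     key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
        if math.hypot(lx - cx, ly - cy) > max_drift:
            # The camera track has drifted: reset its (x, y) from the
            # LRF, which tracks 2D position more reliably.
            corrected.append((lx, ly, cz))
        else:
            corrected.append((cx, cy, cz))
    return corrected
```

In this sketch the LRF acts purely as a corrector: the multi-camera tracker still supplies the 3D estimate, while the LRF periodically pulls drifting tracks back to well-grounded 2D positions.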
