Abstract

Tracking a person's position is a useful skill for the coming generation of mobile robots, and it poses a challenging planning and control problem in dynamic environments. We propose an omni-directional method for estimating a speaker's position that combines audio and visual information. The position of the sound source is estimated from the differences in arrival time of the sound at a multi-channel microphone array. Robust human template matching on the omni-directional image is then combined with the sound source estimate to achieve a highly accurate estimate of the speaker's location. In our experiments, the system was implemented and tested on an omni-directional robot in our laboratory. The results show that it can reliably detect and track moving objects in a natural environment.
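The arrival-time-difference idea mentioned in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes a hypothetical two-microphone far-field setup (spacing, sample rate, and speed of sound are all placeholder values) and estimates the time difference of arrival (TDOA) from the peak of the cross-correlation, then converts it to a bearing.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature (assumed)
MIC_SPACING = 0.2       # metres between the two microphones (hypothetical)
FS = 16000              # sample rate in Hz (hypothetical)

def estimate_tdoa(sig_a, sig_b, fs=FS):
    """Estimate the time difference of arrival (seconds) between two
    microphone channels from the peak of their cross-correlation.
    Positive result: the sound reaches mic A before mic B."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)  # lag in samples
    return lag / fs

def tdoa_to_bearing(tdoa, spacing=MIC_SPACING, c=SPEED_OF_SOUND):
    """Convert a TDOA into a bearing (degrees) relative to the mic
    baseline, under a far-field plane-wave assumption."""
    ratio = np.clip(c * tdoa / spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(ratio))

# Synthetic check: a short pulse that reaches mic B 5 samples later.
pulse = np.zeros(256)
pulse[100:110] = 1.0
delayed = np.roll(pulse, 5)
tdoa = estimate_tdoa(pulse, delayed)
bearing = tdoa_to_bearing(tdoa)
```

With more than two microphones, the same pairwise TDOA estimates can be intersected to localize the source omni-directionally, and (as in the paper) fused with visual template matching for higher accuracy.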
