Abstract

Sound is essential to enhancing the visual experience and human-robot interaction, but most research and development effort has been directed towards sound generation, speech synthesis and speech recognition. Auditory scene analysis has received little attention because real-time perception of a mixture of sounds is difficult. Recently, Nakadai et al. developed real-time auditory and visual multiple-talker tracking technology. In this paper, this technology is applied to human-robot verbal and non-verbal interaction, including a receptionist robot and a companion robot at a party. The system includes face identification, speech recognition, focus-of-attention control and a sensorimotor task in tracking multiple talkers. The system is implemented on an upper-torso humanoid called SIG, and talker tracking is attained by distributed processing on three nodes connected by a 100Base-TX network. The overall tracking delay is 200 ms. Focus-of-attention is controlled by associating auditory and visual streams, using the sound source direction and talker position as clues. Once an association is established, the humanoid keeps its face turned towards the associated talker.
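
The stream association that drives focus-of-attention can be pictured with a minimal sketch. The Python code below is only an illustration, assuming a simple angular-distance test between the estimated sound-source direction and the directions of detected faces; the stream types (AuditoryStream, VisualStream), the 10-degree tolerance and the function names are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AuditoryStream:
    azimuth_deg: float                  # estimated sound-source direction
    talker_id: Optional[int] = None     # filled in once associated with a face

@dataclass
class VisualStream:
    azimuth_deg: float                  # direction of a detected face
    talker_id: int                      # identity from face identification

# Assumed tolerance for deciding that a sound and a face belong to the same talker.
ASSOCIATION_THRESHOLD_DEG = 10.0

def associate(audio: AuditoryStream, faces: List[VisualStream]) -> Optional[VisualStream]:
    """Pair the auditory stream with the visual stream closest in direction,
    provided the angular difference is within the tolerance."""
    best, best_diff = None, ASSOCIATION_THRESHOLD_DEG
    for face in faces:
        diff = abs(face.azimuth_deg - audio.azimuth_deg)
        if diff <= best_diff:
            best, best_diff = face, diff
    if best is not None:
        audio.talker_id = best.talker_id
    return best

def focus_of_attention(audio: AuditoryStream, faces: List[VisualStream]) -> Optional[float]:
    """Return the azimuth the head should turn towards once an association exists."""
    target = associate(audio, faces)
    return target.azimuth_deg if target is not None else None

# Example: a talker heard at 18 degrees matches the face seen at 20 degrees,
# so the humanoid would keep its face turned towards +20 degrees.
faces = [VisualStream(azimuth_deg=20.0, talker_id=1),
         VisualStream(azimuth_deg=-45.0, talker_id=2)]
audio = AuditoryStream(azimuth_deg=18.0)
print(focus_of_attention(audio, faces))   # -> 20.0
```

In the actual system the association must also handle stream creation, interruption and deregistration in real time; the sketch shows only the directional matching that selects the talker to attend to.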
