Abstract

Mobile robots are equipped with sensitive audio-visual sensors, usually microphone arrays and video cameras. These are the main sources of the audio-visual information used to perform mobile robot navigation tasks, modeling human audio-visual perception. The results of audio and visual perception algorithms are widely used, separately or in conjunction (audio-visual perception), in mobile robot navigation, for example to control robot motion in applications such as people and object tracking, surveillance systems, etc. The effectiveness and precision of audio-visual perception methods in mobile robot navigation can be enhanced by combining audio-visual perception with audio-visual attention. A substantial body of knowledge exists describing the phenomena of human audio and visual attention. Such approaches are usually based on extensive physiological, psychological, medical, and technical experimental investigations relating human audio and visual attention to human audio and visual perception, with the leading role played by brain activity. The results of these investigations are very important, but they are not sufficient for modeling audio-visual attention in mobile robots, mainly because a brain is missing from mobile robot audio-visual perception systems. Therefore, this chapter proposes to use the existing definitions and models of human audio and visual attention, adapting them to models of mobile robot audio and visual attention and combining them with the results of mobile robot audio and visual perception in navigation tasks.
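To make the proposed combination concrete, the sketch below shows one simple way perception outputs from the two modalities could be fused into an attention map that steers the robot. This is a minimal illustration under assumed conventions, not the chapter's actual model: the bin layout, the Gaussian saliency bumps, the fusion weights, and the function names (`gaussian_bump`, `fuse_attention`) are all hypothetical.

```python
import numpy as np

# Hypothetical sketch: fuse audio and visual "saliency" over robot heading
# directions into a single attention map, then steer toward the most
# salient azimuth. All names, weights, and shapes are illustrative
# assumptions, not taken from the chapter.

N_BINS = 72                                        # 5 degrees per azimuth bin
AZIMUTHS = np.linspace(-180, 180, N_BINS, endpoint=False)

def gaussian_bump(center_deg: float, sigma_deg: float) -> np.ndarray:
    """Saliency concentrated around one azimuth (wrap-around aware)."""
    d = (AZIMUTHS - center_deg + 180) % 360 - 180  # shortest angular distance
    return np.exp(-0.5 * (d / sigma_deg) ** 2)

def fuse_attention(visual: np.ndarray, audio: np.ndarray,
                   w_visual: float = 0.6, w_audio: float = 0.4) -> np.ndarray:
    """Weighted sum of the normalized modality maps (one simple fusion rule)."""
    v = visual / (visual.sum() + 1e-9)
    a = audio / (audio.sum() + 1e-9)
    att = w_visual * v + w_audio * a
    return att / att.sum()

# Toy inputs: a visual detection at +30 deg, a sound source at -45 deg.
visual_map = gaussian_bump(30.0, sigma_deg=10.0)
audio_map = gaussian_bump(-45.0, sigma_deg=20.0)   # audio localization is coarser

attention = fuse_attention(visual_map, audio_map)
target_azimuth = AZIMUTHS[np.argmax(attention)]
print(f"steer toward {target_azimuth:.0f} deg")
```

In this toy configuration the sharper, more heavily weighted visual cue wins and the robot turns toward +30 degrees; lowering `w_visual` or widening the visual bump would shift attention toward the sound source instead, which is the kind of trade-off an attention model layered over perception is meant to manage.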
