Abstract
Motion is the defining capability of intelligent mobile robots, and many methods and algorithms exist for mobile robot motion control. These methods rest on different principles, but all pursue the same final goal: precise motion control with clear orientation within the robot's area of perception and observation. This chapter first outlines the mobile robot's audio and visual systems with their corresponding audio (microphone array) and video (mono, stereo, or thermal camera) sensors, accompanied by a laser rangefinder. The audio and video information captured by these sensors feeds the proposed audio-visual perception model, which performs joint processing of the audio-visual data to determine the robot's current position (current space coordinates) in the area of perception and observation. The captured information is first evaluated with algorithms developed for speech and image quality estimation, so that preprocessing can be applied to improve the data quality and to minimize errors in the position calculations. The space coordinates obtained from the laser rangefinder serve as supplementary position information, used for error calculation and for comparison with the results of the audio-visual motion control. The perception model employs several methods: RANdom SAmple Consensus (RANSAC) to estimate the parameters of a mathematical model from a set of observed audio-visual coordinate data; Direction Of Arrival (DOA) estimation with the microphone array to localize the speaker issuing voice commands to the robot; and speech recognition of the voice commands sent by the speaker.
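The RANSAC step named above can be sketched as follows. This is a minimal illustration only: the 2-D line model, iteration count, and inlier threshold are assumptions for demonstration, not the chapter's actual model of audio-visual coordinate data.

```python
import random

def ransac_line(points, n_iters=200, threshold=0.5, seed=0):
    """Fit a line y = a*x + b to noisy 2-D points with RANSAC.

    Repeatedly samples two points, hypothesizes a line through them,
    and keeps the hypothesis supported by the largest inlier set.
    (Illustrative sketch; model and thresholds are assumptions.)
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                      # skip degenerate vertical sample
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for (x, y) in points
                   if abs(y - (a * x + b)) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Mostly collinear observations (y = 2x + 1) with two gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -30)]
(a, b), inliers = ransac_line(pts)   # recovers a ≈ 2, b ≈ 1
```

The same sample-hypothesize-score loop generalizes to the richer models the chapter needs (e.g., planes or homographies from visual coordinates); only the minimal sample size and the residual function change.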
The current robot position computed from the jointly perceived audio-visual information is then used in appropriate algorithms for navigation, motion control, and object tracking: map-based or map-less methods, path planning and obstacle avoidance, Simultaneous Localization And Mapping (SLAM), data fusion, etc. The error, accuracy, and precision of the proposed motion control with audio-visual perception are analyzed and estimated from the numerous experimental tests presented at the end of the chapter. The experiments are carried out mainly as simulations of the algorithms listed above, but parallel computing methods are also tried in the implementation of the developed algorithms, aiming at real-time robot navigation and motion control using the audio-visual information perceived by the robot's sensors.
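As a flavor of the path-planning and obstacle-avoidance step, a shortest path on an occupancy grid can be found with breadth-first search. The grid encoding (0 = free, 1 = obstacle) and 4-connected motion are illustrative assumptions, not the chapter's planner.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid via BFS.

    Cells with value 1 are obstacles; returns the list of (row, col)
    cells from start to goal, or None if the goal is unreachable.
    (Illustrative sketch; grid encoding is an assumption.)
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                  # reconstruct path by backtracking
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# A wall across the middle forces the robot around the right side.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
```

In practice a robot would replan whenever the perceived position or the occupancy grid is updated by new audio-visual or rangefinder data.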