Abstract
Cross-modal integration is essential for service robots to reliably perceive the relevant parts of a partially known, unstructured environment. We demonstrate how multimodal integration at different abstraction levels leads to reasonable behavior that would be difficult to achieve with unimodal approaches. Sensing and acting modalities are composed into multimodal robot skills via a fuzzy multisensor fusion approach. Single modalities constitute basic robot skills that can be dynamically composed into appropriate behaviors by symbolic planning. Furthermore, multimodal integration is exploited to answer relevant queries about the partially known environment. All of these approaches have been implemented and tested on our mobile service robot platform TASER.
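The abstract does not give the fusion formulas, so the following is only a generic sketch of how fuzzy multisensor fusion of the kind described is commonly realized: each sensor's estimate is assigned a degree of confidence by a fuzzy membership function, and the fused estimate is a confidence-weighted combination. All function names and the triangular membership shape are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of fuzzy multisensor fusion (illustrative only;
# membership shapes and the weighting rule are assumptions, not taken
# from the paper).

def triangular_membership(x: float, a: float, b: float, c: float) -> float:
    """Triangular fuzzy membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuse(readings: list[tuple[float, float]]) -> float:
    """Fuse (value, confidence) pairs by confidence-weighted averaging."""
    total = sum(conf for _, conf in readings)
    if total == 0:
        raise ValueError("no reading with non-zero confidence")
    return sum(val * conf for val, conf in readings) / total

# Example: laser and camera estimates of an object's distance in metres,
# each graded against an expected range of roughly 0.5 m to 1.5 m.
laser = (1.02, triangular_membership(1.02, 0.5, 1.0, 1.5))
camera = (1.10, triangular_membership(1.10, 0.5, 1.0, 1.5))
fused = fuse([laser, camera])
```

Because the weights come from membership degrees rather than fixed gains, a sensor whose reading falls outside its plausible range is automatically discounted, which is one reason fuzzy fusion is attractive for partially known environments.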