Abstract
We introduce a neuro-fuzzy system for localising mobile robots based solely on raw vision data, without relying on landmarks or artificial symbols. In an initial learning step the system is trained on the compressed input data so as to classify different situations and to associate appropriate behaviours with these situations. Input data may, for example, be generated by an omnidirectional vision system, obviating the need for active cameras. At run time the compressed input data are fed into different B-spline fuzzy controllers, each of which determines the correspondence between the actual situation and the situation it was trained for. The matching controller may then directly drive the actuators to realise the desired behaviour. The system thus realises a tight coupling between a very high-dimensional input parameter space and the robot actuators. The algorithms are straightforward to implement, and the computational effort is much lower than with conventional vision systems. Experimental results validate the approach.
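The run-time scheme described above (per-situation controllers scoring their correspondence to the current compressed input, with the best match emitting the actuator command) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and function names are hypothetical, a Gaussian similarity stands in for the learned correspondence measure, and order-2 (triangular) B-spline bases stand in for whatever spline order the authors use.

```python
import numpy as np

def triangular_memberships(knots, x):
    """Order-2 (triangular) B-spline basis over a 1-D knot vector.

    Returns one membership value per knot; inside [knots[0], knots[-1]]
    the values form a partition of unity, as B-spline bases do.
    """
    x = np.clip(x, knots[0], knots[-1])
    mu = np.zeros(len(knots))
    i = np.searchsorted(knots, x, side="right") - 1
    if i >= len(knots) - 1:          # x at the right boundary
        mu[-1] = 1.0
        return mu
    t = (x - knots[i]) / (knots[i + 1] - knots[i])
    mu[i], mu[i + 1] = 1.0 - t, t
    return mu

class BSplineFuzzyController:
    """One controller per trained situation (hypothetical structure)."""

    def __init__(self, prototype, knots, weights):
        self.prototype = np.asarray(prototype)  # compressed training input
        self.knots = np.asarray(knots)          # knot positions of the bases
        self.weights = np.asarray(weights)      # control values at the knots

    def match(self, features):
        # Correspondence between the current compressed input and the
        # situation this controller was trained for (assumed Gaussian here).
        d = np.linalg.norm(np.asarray(features) - self.prototype)
        return float(np.exp(-d ** 2))

    def command(self, x):
        # Defuzzified actuator command: weighted sum of B-spline bases.
        return float(triangular_memberships(self.knots, x) @ self.weights)

def drive(controllers, features, x):
    """Pick the best-matching controller and let it drive the actuator."""
    best = max(controllers, key=lambda c: c.match(features))
    return best.command(x)
```

Because the triangular bases sum to one, `command` linearly interpolates the control values between knots, which is what makes B-spline controllers cheap to evaluate at run time compared with full image processing.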