Abstract

In this article, we apply a bio-inspired control architecture to a roving robot performing different tasks. At the heart of the control system is the perceptual core, where heterogeneous information coming from sensors is merged to build an internal portrait representing the current situation of the environment. The internal representation triggers an action as the response to the current stimuli, closing the loop between the agent and the external world. The robot's internal state is implemented through a nonlinear lattice of neuron cells, allowing the generation of a large number of emergent steady-state solutions in the form of Turing patterns. These are incrementally shaped, through learning, so as to constitute a "mirror" of the environmental conditions. Reaction-diffusion cellular nonlinear networks were chosen to generate Turing patterns as internal representations of the robot's surroundings. The associations between incoming sensations and the perceptual core, and between Turing patterns and actions to be performed, are driven by two reward-based learning mechanisms. We report on simulation results and experiments on a roving robot to show the suitability of the approach.
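The abstract does not specify the paper's reaction-diffusion cellular nonlinear network model or its parameters. As a generic illustration of how a two-variable reaction-diffusion lattice can produce emergent Turing-like patterns from a near-uniform initial state, the sketch below simulates the classic Gray-Scott system (a stand-in model, not the authors'; the grid size, diffusion rates, and feed/kill constants are standard demo values chosen here as assumptions):

```python
import numpy as np

def laplacian(Z):
    # 5-point discrete Laplacian with periodic boundaries,
    # approximating diffusion across the cell lattice.
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def gray_scott(n=64, steps=2000, Du=0.16, Dv=0.08, F=0.035, k=0.060, seed=0):
    # U and V are two interacting concentrations on an n x n lattice.
    # Parameter values here are typical Gray-Scott demo settings
    # (assumed, not taken from the paper).
    rng = np.random.default_rng(seed)
    U = np.ones((n, n))
    V = np.zeros((n, n))
    # Seed a small central square plus noise to break symmetry;
    # the pattern that emerges is a stable spatial structure.
    r, c = n // 8, n // 2
    U[c - r:c + r, c - r:c + r] = 0.50
    V[c - r:c + r, c - r:c + r] = 0.25
    U += 0.02 * rng.standard_normal((n, n))
    for _ in range(steps):
        uvv = U * V * V
        U += Du * laplacian(U) - uvv + F * (1 - U)
        V += Dv * laplacian(V) + uvv - (F + k) * V
    return U, V

U, V = gray_scott()
```

After enough iterations the V field settles into a spotted/striped steady state rather than staying uniform, which is the kind of discrete, learnable pattern the abstract describes as an internal representation of the environment.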
