Abstract

Machines and robots able to solve an ever-increasing number of tasks will continue to be integrated into our everyday lives. What is still lacking for a real breakthrough is a suitable degree of flexibility and adaptability that would allow a cognitive robot to deal with dynamically changing environments and situations that cannot be designed for a priori. In this paper, we review a series of "show-case" cognitive robots in which sensors measure distance, contact, or visual data to provide suitable input for emergent behaviors that solve specific goals. In most of our simulations and experiments we created models that follow biological principles found in insects. First, we review two paradigms for animal locomotion: the Central Pattern Generator (CPG) and a reflex-based approach. We show how simple contact sensors can provide efficient feedback for sophisticated and adaptive locomotion strategies. Next, we show how some simple (lower-level) sensors can be used to train more complex (higher-level) ones with data (which are initially nothing more than clusters of pixels) that associate "meanings" with visual details by using bio-inspired processing algorithms. The last example shows the emergence of cognitive schemes in spatio-temporal nonlinear neural lattices that are induced by sensory events.
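As an aside, the CPG paradigm mentioned above is often modeled as a set of coupled oscillators whose relative phases encode a gait. The sketch below is purely illustrative and not taken from the paper: it assumes a six-legged robot, a Kuramoto-style phase coupling toward fixed tripod-gait offsets, and arbitrary placeholder parameter values (frequency, coupling strength, time step).

```python
# Illustrative sketch (not from the paper): a minimal Central Pattern
# Generator modeled as coupled phase oscillators driving six legs in an
# alternating tripod gait. All parameter values are placeholders.
import math

N_LEGS = 6
OMEGA = 2.0 * math.pi * 1.5      # assumed intrinsic stepping frequency (rad/s)
K = 4.0                          # assumed coupling strength between legs
DT = 0.01                        # integration step (s)

# Desired phase offsets for a tripod gait: legs 0, 2, 4 step in antiphase
# with legs 1, 3, 5.
target_offset = [0.0 if i % 2 == 0 else math.pi for i in range(N_LEGS)]
phase = list(target_offset)      # start near the desired pattern

def step(phase):
    """Advance every oscillator by one Euler step with phase coupling."""
    new = []
    for i in range(N_LEGS):
        # Coupling pulls the relative phases toward the target offsets.
        coupling = sum(
            math.sin((phase[j] - target_offset[j]) - (phase[i] - target_offset[i]))
            for j in range(N_LEGS) if j != i
        )
        new.append(phase[i] + DT * (OMEGA + (K / N_LEGS) * coupling))
    return new

for _ in range(300):             # simulate 3 s of rhythmic output
    phase = step(phase)
    # A joint command could be, e.g., a sinusoid of the oscillator phase.
    commands = [math.sin(p) for p in phase]
```

In such a scheme, contact-sensor feedback of the kind discussed in the abstract could, for example, be injected as an additional term that perturbs a leg's phase when ground contact is detected; the reflex-based alternative instead triggers stepping directly from such sensory events rather than from an internal rhythm.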
