Abstract

Locomotion in animals is characterized as a stable, rhythmic behavior that is at the same time flexible and highly adaptive. Many motor control approaches have made considerable progress by drawing on insights from biology. As one example, the Walknet approach for six-legged robots realizes a decentralized and modular structure that reflects insights from walking in stick insects. While this approach can deal with a variety of disturbances during locomotion, it is still limited when dealing with novel and particularly challenging walking situations. This has led to a cognitive expansion that allows behaviors to be tested outside their original context and to search for a solution in the form of internal simulation. What is still missing in this approach is the variation of the lower-level motor primitives themselves to cope with difficult situations, as well as any form of learning. Here, we propose how this biologically inspired approach can be extended to include a form of trial-and-error learning. The realization is currently underway and is based on a broader formulation as a hierarchical reinforcement learning problem. Importantly, the structure of the hierarchy follows the decentralized organization taken from insects.
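To make the proposed decomposition concrete, the sketch below illustrates one possible reading of a hierarchical, decentralized trial-and-error scheme: a higher level provides a walking context, and each leg maintains its own local value table over a small set of motor primitives. All names, sizes, and the simple bandit-style update are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_LEGS = 6         # hexapod: one low-level controller per leg (decentralized)
N_PRIMITIVES = 4   # hypothetical discrete motor primitives available to each leg
N_CONTEXTS = 3     # hypothetical high-level walking contexts (e.g. flat ground, gap)

# One independent action-value table per leg: Q[leg][context, primitive].
# The hierarchy mirrors the decentralized organization: the higher level selects
# the context, and each leg then chooses its own primitive locally.
Q = [np.zeros((N_CONTEXTS, N_PRIMITIVES)) for _ in range(N_LEGS)]

def select_primitives(context, epsilon=0.1):
    """Epsilon-greedy choice of one motor primitive per leg for the given context."""
    actions = []
    for leg in range(N_LEGS):
        if rng.random() < epsilon:
            actions.append(int(rng.integers(N_PRIMITIVES)))
        else:
            actions.append(int(np.argmax(Q[leg][context])))
    return actions

def update(context, actions, reward, alpha=0.1):
    """Bandit-style trial-and-error update of each leg's local value estimates."""
    for leg, a in enumerate(actions):
        Q[leg][context, a] += alpha * (reward - Q[leg][context, a])

# Usage: one trial corresponds to testing a combination of primitives
# (e.g. in internal simulation) and reinforcing it with the obtained reward.
context = 1                       # e.g. a "challenging situation" such as a gap
actions = select_primitives(context)
reward = 1.0                      # placeholder for a stability/progress signal
update(context, actions, reward)
```

The point of the sketch is the structural choice: variation and learning happen in the per-leg low-level modules, while the higher level only selects among contexts, in line with the decentralized organization described in the abstract.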
