Abstract

We present an approach to fall detection and recovery from perturbations during humanoid robot swinging. Reinforcement learning ($Q$-learning) is employed to explore the relationship between actions and states, allowing the robot to trigger a reaction that avoids falling. A self-organizing map (SOM) with a circular topological neighborhood function is employed to transform the robot's continuous exteroceptive information during stable swinging into a discrete state representation. We take advantage of the SOM's clustering and topology preservation for perturbation detection. Swinging and recovery actions are generated from the same neural model using a multilayered multipattern central pattern generator. Experiments, carried out both in simulation and on a real humanoid robot (NAO), show that our approach allows humanoid robots to recover successfully from pushes by learning to switch from a rhythmic to an appropriate nonrhythmic behavior.
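The pipeline sketched in the abstract can be illustrated with a minimal example: a one-dimensional SOM whose units lie on a ring (so the topological neighborhood wraps around) discretizes continuous sensor readings into state indices, and a tabular $Q$-learning agent updates action values over those states. This is not the authors' implementation; the array sizes, sensor dimensionality, reward signal, and function names below are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): ring-topology SOM for state
# discretization feeding a tabular Q-learning update.
import numpy as np

class RingSOM:
    """1-D self-organizing map whose units lie on a circle, so the
    topological distance between units wraps around the ring."""
    def __init__(self, n_units, dim, lr=0.1, sigma=2.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(n_units, dim))  # codebook vectors
        self.n = n_units
        self.lr, self.sigma = lr, sigma

    def bmu(self, x):
        # Best-matching unit = discrete state index for observation x.
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def train_step(self, x):
        b = self.bmu(x)
        idx = np.arange(self.n)
        # Circular topological distance on the ring of units.
        d = np.minimum(np.abs(idx - b), self.n - np.abs(idx - b))
        h = np.exp(-(d ** 2) / (2 * self.sigma ** 2))    # neighborhood function
        self.w += self.lr * h[:, None] * (x - self.w)    # pull neighbors toward x
        return b

# Tabular Q-learning over the SOM's discrete states (hypothetical sizes).
n_states, n_actions = 16, 4            # e.g. 4 candidate recovery actions
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2

def q_update(s, a, r, s_next):
    """Standard one-step Q-learning update."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

def select_action(s, rng):
    """Epsilon-greedy action selection."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    som = RingSOM(n_units=n_states, dim=3)   # e.g. torso roll, pitch, gyro rate
    # Fit the SOM on synthetic "stable swinging" sensor data.
    for x in rng.normal(size=(2000, 3)):
        som.train_step(x)
    # One illustrative transition with a synthetic reward.
    s = som.bmu(rng.normal(size=3))
    a = select_action(s, rng)
    s_next = som.bmu(rng.normal(size=3))
    q_update(s, a, r=1.0, s_next=s_next)
```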
