Abstract

We present an approach to fall detection and perturbation recovery during humanoid robot swinging. Reinforcement learning (Q-learning) is employed to explore the relationship between actions and states that allows the robot to trigger a reaction to avoid falling. A self-organizing map (SOM) with a circular topological neighborhood function transforms the robot's continuous exteroceptive information during stable swinging into a discrete representation of states. We exploit the SOM's clustering and topology preservation for perturbation detection. Swinging and recovery actions are generated from the same neural model, a multilayered multipattern central pattern generator. Experiments, carried out both in simulation and on a real humanoid robot (NAO), show that our approach allows humanoid robots to recover successfully from pushes by learning to switch from a rhythmic to an appropriate nonrhythmic behavior.
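The pipeline the abstract describes, SOM discretization of continuous sensor data followed by tabular Q-learning over the resulting discrete states, can be sketched as follows. This is a minimal illustration, not the paper's implementation: all function names, unit counts, learning rates, and the circular (ring) neighborhood formulation are assumptions for the sketch.

```python
import numpy as np

def circular_neighborhood(n_units, winner, sigma):
    """Neighborhood weights on a ring: index distance wraps around, so the
    first and last units are adjacent (circular topology, as assumed here)."""
    idx = np.arange(n_units)
    d = np.minimum(np.abs(idx - winner), n_units - np.abs(idx - winner))
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

def train_som(data, n_units=8, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 1-D SOM on a ring that maps continuous sensor vectors
    (rows of `data`) to discrete unit indices (states)."""
    rng = np.random.default_rng(seed)
    w = data[rng.integers(0, len(data), n_units)].astype(float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)              # decaying learning rate
        sigma = max(0.5, sigma0 * (1 - t / epochs))  # shrinking neighborhood
        for x in rng.permutation(data):
            winner = int(np.argmin(np.linalg.norm(w - x, axis=1)))
            h = circular_neighborhood(n_units, winner, sigma)
            w += lr * h[:, None] * (x - w)       # pull units toward the sample
    return w

def state_of(w, x):
    """Discrete state = index of the best-matching unit for observation x."""
    return int(np.argmin(np.linalg.norm(w - x, axis=1)))

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update over the SOM-discretized states."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
```

In this sketch, stable-swinging sensor readings train the SOM; at run time each observation is mapped to a state via `state_of`, and `q_update` adjusts the value of the action taken in that state (e.g., continue the rhythmic pattern or trigger a nonrhythmic recovery motion).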

