Abstract

Falling is inevitable for legged robots operating in challenging real-world scenarios, where environments are unstructured and situations are unpredictable, such as uneven terrain in the wild. Hence, to recover from falls and achieve all-terrain traversability, intelligent robots must possess the complex motor skills required to resume operation. To go beyond the limitations of handcrafted control, we investigated a deep reinforcement learning approach to learning generalized feedback-control policies for fall recovery that are robust to external disturbances. We proposed a design guideline for selecting key states for initialization, including a comparison with random state initialization. The proposed learning-based pipeline is applicable to different robot models and their corner cases, including both small- and large-size bipeds and quadrupeds. Further, we show that the learned fall recovery policies are hardware-feasible and can be implemented on real robots.
