Abstract

From insects to larger mammals, legged animals easily traverse a wide variety of challenging environments by carefully selecting, reaching, and exploiting high-quality contacts with the terrain. In contrast, existing robotic foothold planning methods remain computationally expensive, often relying on exhaustive search and/or (often offline) optimization, which limits their adoption in real-life robotic deployments. In this work, we propose a low-cost, bio-inspired foothold planning method for legged robots that replicates the mechanism of the central nervous system of legged mammals. We develop a low-level joint-space CPG model along with a high-level vision-based controller that inexpensively predicts future foothold locations and locally optimizes them using a potential-field-based approach. Specifically, by reasoning about the quality of ground contacts and the robot's stability through the high-level vision-based controller, our CPG model smoothly and iteratively updates the relevant locomotion parameters to optimize both foothold locations and body pose, directly in the joint space of the robot for easier implementation. We experimentally validate our control model on a modular hexapod on various locomotion tasks in obstacle-rich environments as well as on stair climbing. Our results show that our method enables more stable and steadier locomotion than an open-loop CPG controller, yielding higher-quality feedback from onboard sensors by minimizing the effects of slippage and unexpected impacts.
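To illustrate the low-level joint-space CPG idea, the sketch below implements a single Hopf oscillator, a common building block for CPG-based locomotion controllers. This is an illustrative example only, not the paper's implementation: the parameter names (`mu`, `omega`, `gain`) and their values are assumptions, and a real controller would couple one such oscillator per joint and modulate the parameters from the high-level vision-based feedback.

```python
import math

def hopf_step(x, y, mu=1.0, omega=2.0 * math.pi, gain=10.0, dt=1e-3):
    """One explicit-Euler step of a Hopf oscillator.

    The oscillator's limit cycle is a circle of radius sqrt(mu), so x(t)
    converges to a stable sinusoid that can serve as a joint-angle
    setpoint. A high-level controller could adjust mu (amplitude) and
    omega (frequency) online to shift footholds and body pose.
    """
    r2 = x * x + y * y                      # squared distance from origin
    dx = gain * (mu - r2) * x - omega * y   # radial convergence + rotation
    dy = gain * (mu - r2) * y + omega * x
    return x + dx * dt, y + dy * dt

def simulate(steps=20000, x=0.1, y=0.0):
    """Integrate from a small initial state; the amplitude settles
    near sqrt(mu) regardless of the starting point."""
    for _ in range(steps):
        x, y = hopf_step(x, y)
    return x, y
```

Because the limit cycle is an attractor, smooth parameter updates (as the abstract describes) produce smooth joint trajectories rather than discontinuous jumps, which is the main appeal of CPG models for online foothold adjustment.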
