Abstract

Virtual constraint-based gait control can acquire a variety of gait motions depending on the virtual constraints provided. However, footstep planning is difficult under this method, because the center of mass (CoM) motion cannot be predicted analytically; the method instead requires a numerical solution of the forward problem, which is vulnerable to errors. To address this issue, we propose a footstep planning method using model-based reinforcement learning for virtual constraint-based walking. In the proposed method, model predictive control (MPC) evaluates the stability of each step while adjusting the footstep and manipulating the conserved quantities. To simplify the optimal control problem, passive dynamic autonomous control (PDAC), which compresses the CoM motion to the lowest dimension, is employed for walking control. The transition model used to predict future states is decomposed into three segments, exploiting knowledge of the gait phases to improve learning speed. The three decomposed models and a stability cost, which evaluates footstep stability, are trained with ensemble learning to reduce modeling error and enable efficient exploration. Simulation results showed that the proposed method achieved a goal achievement rate nearly twice that of the simplified baseline. Furthermore, the proposed method satisfied more than 70% of footstep constraints even in constrained environments.
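The core idea of combining an ensemble of learned transition models with MPC-style footstep selection can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the linear dynamics, the quadratic stability cost, and the discrete footstep candidates are all placeholder assumptions standing in for the learned decomposed models and the trained stability cost described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

class EnsembleModel:
    """Ensemble of bootstrapped one-step transition models x' = A x + B u.

    Placeholder for the paper's learned models: each member is a random
    linear system here, purely for illustration.
    """
    def __init__(self, n_models, dim_x, dim_u):
        self.As = [np.eye(dim_x) + 0.01 * rng.standard_normal((dim_x, dim_x))
                   for _ in range(n_models)]
        self.Bs = [0.1 * rng.standard_normal((dim_x, dim_u))
                   for _ in range(n_models)]

    def predict(self, x, u):
        # Return each member's next-state prediction, shape (n_models, dim_x).
        return np.stack([A @ x + B @ u for A, B in zip(self.As, self.Bs)])

def stability_cost(x):
    # Placeholder stability cost: squared deviation of the (reduced) CoM
    # state from a nominal reference at the origin.
    return float(np.sum(x ** 2))

def plan_footstep(model, x, candidates):
    # One-step MPC over a discrete set of candidate footsteps: pick the
    # candidate whose predicted cost, averaged over the ensemble, is lowest.
    costs = [np.mean([stability_cost(xn) for xn in model.predict(x, u)])
             for u in candidates]
    return candidates[int(np.argmin(costs))]

model = EnsembleModel(n_models=5, dim_x=4, dim_u=2)
x0 = np.array([0.3, -0.1, 0.05, 0.0])
candidates = [np.array([dx, dy])
              for dx in (-0.1, 0.0, 0.1) for dy in (-0.05, 0.05)]
best = plan_footstep(model, x0, candidates)
```

Averaging the cost over ensemble members is one common way to hedge against modeling error; the paper's method additionally decomposes the transition model by gait phase, which this sketch does not attempt to reproduce.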
