Abstract

Walking is an extremely challenging control problem due to its dynamically unstable nature, and it is further complicated by high-dimensional continuous state and action spaces. We use locally weighted projection regression (LWPR), a locally structurally adaptive nonlinear function approximator, as the basis for learned control policies. Empirical evidence suggests that control policies for high-dimensional problems lie on low-dimensional manifolds. LWPR models this manifold in a computationally efficient manner, as it models only those states that are actually visited, using a local dimensionality reduction technique based on partial least squares regression. We show that local models are capable of learning control policies for physics-based simulations of planar bipedal walking. Locally structured control policies are learned from observation of a variety of inputs, including human control and existing parametrized control policies. We extend the pose control graph to the concept of a policy control graph and show that this representation allows transition points between different control policies to be learned.
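To make the core idea concrete, the sketch below shows locally weighted regression with Gaussian receptive fields, the local-linear-model principle underlying LWPR. This is an illustrative simplification, not the authors' implementation: real LWPR adds incremental receptive-field allocation and partial least squares projections, and all class and function names here are hypothetical.

```python
import numpy as np

class LocalModel:
    """One receptive field: a Gaussian weighting kernel plus a local linear fit."""

    def __init__(self, center, width):
        self.center = np.asarray(center, dtype=float)
        self.width = width          # receptive-field radius (illustrative, fixed)
        self.beta = None            # local linear coefficients

    def activation(self, x):
        # Gaussian weight of input x under this receptive field
        d = np.asarray(x, dtype=float) - self.center
        return np.exp(-0.5 * np.dot(d, d) / self.width ** 2)

    def fit(self, X, y):
        # weighted least squares on [X, 1] with Gaussian weights
        w = np.array([self.activation(x) for x in X])
        A = np.hstack([X, np.ones((len(X), 1))])
        W = np.diag(w)
        self.beta, *_ = np.linalg.lstsq(W @ A, W @ y, rcond=None)

    def predict(self, x):
        a = np.append(np.asarray(x, dtype=float), 1.0)
        return float(a @ self.beta)


def lwr_predict(models, x):
    # blend local predictions by normalized receptive-field activation
    acts = np.array([m.activation(x) for m in models])
    preds = np.array([m.predict(x) for m in models])
    return float(acts @ preds / acts.sum())


# toy 1-D example: approximate y = sin(x) with three local models
X = np.linspace(0.0, np.pi, 30).reshape(-1, 1)
y = np.sin(X).ravel()
models = [LocalModel([c], 0.6) for c in (0.5, 1.6, 2.7)]
for m in models:
    m.fit(X, y)
```

Because each model fits only the region where its kernel is active, the approximator spends capacity only on visited states, which is the property the abstract highlights for high-dimensional control.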

