Abstract

In this paper, we present a simple and fast supervised learning framework based on model predictive control to learn motion controllers that make a physics-based character track given example motions. The proposed framework is composed of two components: training data generation and offline learning. Given an example motion, the former component stochastically controls the character with an optimal controller while repeatedly updating that controller, through model predictive control, to track the example motion over a time window from the character's current state to a near-future state. The repeated updates of the optimal controller, together with the stochastic control, make it possible to effectively explore the various states the character may encounter while mimicking the example motion and to collect useful training data for supervised learning. Once all the training data is generated, the latter component normalizes the data to remove the disparities in magnitude and units inherent in it and trains an artificial neural network with a simple architecture as a controller. Experimental results for walking and running motions demonstrate how effectively and quickly the proposed framework produces physics-based motion controllers.
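The offline-learning component described above can be illustrated with a minimal sketch. The code below assumes hypothetical MPC-generated training pairs (character states mapped to control outputs) with features of very different magnitudes and units; it normalizes each feature to zero mean and unit variance, then fits a small one-hidden-layer network by gradient descent. The data, network size, and learning rate are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for MPC-generated tracking data: rows are character
# states (mixed units, e.g., positions, angles, velocities), targets are
# control outputs. Scales differ wildly across features on purpose.
X = rng.normal(size=(256, 6)) * np.array([100.0, 1.0, 10.0, 0.1, 5.0, 50.0])
true_W = rng.normal(size=(6, 2))                 # illustrative ground truth
Y = np.tanh(X / X.std(axis=0)) @ true_W

# Normalize each feature to zero mean / unit variance, removing the
# disparity in magnitude and units mentioned in the abstract.
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xn = (X - mu) / sigma

# Simple architecture: one tanh hidden layer, linear output.
H, lr = 16, 0.05
W1 = rng.normal(scale=0.1, size=(6, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, 2)); b2 = np.zeros(2)

losses = []
for _ in range(500):
    h = np.tanh(Xn @ W1 + b1)                    # forward pass
    pred = h @ W2 + b2
    err = pred - Y
    losses.append((err ** 2).mean())             # mean squared tracking error
    g_pred = 2 * err / len(Xn)                   # backward pass
    gW2, gb2 = h.T @ g_pred, g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)
    gW1, gb1 = Xn.T @ g_h, g_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

At deployment time, the same `mu` and `sigma` would be applied to incoming states before querying the learned controller, so the network always sees inputs in the normalized range it was trained on.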
