Abstract
Generating natural, human-like locomotion and other legged motions for bipedal robots has long been challenging. One emerging solution is imitation learning. Because the available demonstrations are mostly state-only, state-of-the-art Generative Adversarial Imitation Learning (GAIL) with Imitation from Observation (IfO) capability is an ideal framework for this problem. However, learning new or complicated movements is often difficult: the common data sources for these frameworks are either expensive to set up (motion capture) or, due to accuracy problems, hard to turn into satisfactory results without computationally expensive preprocessing (video). Inspired by how people acquire advanced knowledge only after building a basic understanding of a subject, this paper proposes a Motion capture-aided Video Imitation (MoVI) learning framework based on Adversarial Motion Priors (AMP), which combines motion capture data of primary actions such as walking with video clips of a target motion such as running, aiming to produce smooth and natural imitations of the target motion. The framework can produce a variety of human-like locomotion from the most common and abundant motion capture data together with arbitrary video clips of the target motion, without the need for expensive datasets or sophisticated preprocessing.
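To make the AMP-based idea concrete, the sketch below shows one plausible way such a framework could be wired up: a discriminator scores (state, next-state) transitions against a reference set that mixes mocap transitions of the primary action with pose transitions estimated from video of the target motion, and its output is shaped into a style reward for the policy. This is a minimal illustration, not the authors' implementation; the network sizes, the 50/50 mixing weight, and names like `obs_dim` are assumptions, while the least-squares loss and the reward shaping follow the standard AMP formulation.

```python
# Minimal AMP-style sketch (illustrative, not the paper's code): a transition
# discriminator trained on a mix of mocap and video-derived reference data.
import torch
import torch.nn as nn


class AMPDiscriminator(nn.Module):
    def __init__(self, obs_dim: int, hidden: int = 256):
        super().__init__()
        # AMP discriminators score (state, next_state) transition pairs.
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s, s_next], dim=-1))

    def style_reward(self, s, s_next):
        # Standard AMP reward shaping: r = max(0, 1 - 0.25 * (d - 1)^2).
        d = self.forward(s, s_next)
        return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0).squeeze(-1)


def discriminator_loss(disc, mocap, video, policy, video_weight=0.5):
    """Least-squares GAN loss with targets +1 (reference) and -1 (policy).

    `mocap`, `video`, `policy` are (s, s_next) tensor pairs; mixing mocap of
    the primary action with video-derived transitions of the target motion
    at `video_weight` is an assumption made for illustration.
    """
    d_mocap, d_video, d_policy = disc(*mocap), disc(*video), disc(*policy)
    real_loss = ((1.0 - video_weight) * (d_mocap - 1.0).pow(2).mean()
                 + video_weight * (d_video - 1.0).pow(2).mean())
    fake_loss = (d_policy + 1.0).pow(2).mean()
    return real_loss + fake_loss


if __name__ == "__main__":
    obs_dim = 32  # hypothetical observation size
    disc = AMPDiscriminator(obs_dim)
    batch = lambda: (torch.randn(64, obs_dim), torch.randn(64, obs_dim))
    loss = discriminator_loss(disc, batch(), batch(), batch())
    loss.backward()
    print(loss.item(), disc.style_reward(*batch()).mean().item())
```

In a full training loop, the style reward from the discriminator would be added to any task reward and optimized with an on-policy RL algorithm, while the discriminator is updated in alternation on fresh policy rollouts and the mixed reference buffer.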