Abstract

Human-like trajectory generation and footstep planning represent challenging problems in humanoid robotics. Recently, research in computer graphics investigated machine-learning methods for character animation based on training human-like models directly on motion capture data. Such methods proved effective in virtual environments, mainly focusing on trajectory visualization. This letter presents ADHERENT, a system architecture integrating machine-learning methods used in computer graphics with whole-body control methods employed in robotics to generate and stabilize human-like trajectories for humanoid robots. Leveraging human motion capture locomotion data, ADHERENT yields a general footstep planner, including forward, sideways, and backward walking trajectories that blend smoothly from one to another. Furthermore, at the joint configuration level, ADHERENT computes data-driven whole-body postural reference trajectories coherent with the generated footsteps, thus increasing the human likeness of the resulting robot motion. Extensive validations of the proposed architecture are presented with both simulations and real experiments on the iCub humanoid robot, thus demonstrating ADHERENT to be robust to varying step sizes and walking speeds.

Highlights

  • The complexity of the problem of generating trajectories for humanoid robots increases considerably when targeting real-time trajectory generation for different environmental conditions and robot locomotion modes

  • We present ADHERENT, a comprehensive learning-based architecture for efficient human-like whole-body trajectory generation and control of humanoid robots, and validate its robustness with extensive simulations and real-world experiments on the iCub humanoid robot

  • We propose the following approach for kinematically-feasible base motion retargeting (a minimal sketch follows this list): 1) the contact point ${}^{I}p_{C}$ is identified as the lowest among the 8 vertices of the feet’s rectangular approximations; 2) the retargeted base orientation ${}^{I}R_{B}$ is directly retrieved from the motion capture (MoCap) data; 3) the retargeted base position ${}^{I}p_{B}$ is computed by forward kinematics from ${}^{I}p_{C}$, which is constrained to remain fixed between two consecutive retargeting steps, via ${}^{I}p_{B} = {}^{I}p_{C} + {}^{I}R_{C}\,{}^{C}p_{B}$, where $C$ is the frame attached to the contact point and ${}^{C}p_{B}$ is the base position, expressed in $C$, computed by forward kinematics in the updated joint configuration returned by the latest Whole-Body Geometric Retargeting (WBGR) iteration

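As a minimal illustration of the base retargeting step above, the sketch below selects the contact vertex and applies ${}^{I}p_{B} = {}^{I}p_{C} + {}^{I}R_{C}\,{}^{C}p_{B}$. The function and variable names are illustrative assumptions, and the inputs (the foot vertices, the contact-frame orientation, and the forward-kinematics result ${}^{C}p_{B}$) are supplied by the caller; the actual ADHERENT implementation may differ.

```python
import numpy as np

def lowest_foot_vertex(foot_vertices_I: np.ndarray) -> np.ndarray:
    """Step 1): pick the contact point I_p_C as the lowest of the 8 vertices
    of the rectangular approximations of the two feet (4 vertices per foot),
    all expressed in the inertial frame I. Input shape: (8, 3)."""
    return foot_vertices_I[np.argmin(foot_vertices_I[:, 2])]

def retarget_base_position(p_C_I: np.ndarray,
                           R_C_I: np.ndarray,
                           p_B_C: np.ndarray) -> np.ndarray:
    """Step 3): I_p_B = I_p_C + I_R_C @ C_p_B.

    p_C_I : (3,)  contact-point position in I, held fixed between two
            consecutive retargeting steps.
    R_C_I : (3, 3) orientation of the contact frame C w.r.t. I.
    p_B_C : (3,)  base position expressed in C, obtained by forward kinematics
            on the joint configuration returned by the latest WBGR iteration.
    """
    return p_C_I + R_C_I @ p_B_C
```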

Summary

Introduction

The complexity of the problem of generating trajectories for humanoid robots increases considerably when targeting real-time trajectory generation for different environmental conditions and robot locomotion modes.

Whole-Body Geometric Retargeting

Among the various approaches to human motion retargeting (see, e.g., [25], [26], [27], [28]), Whole-Body Geometric Retargeting (WBGR) is a recent method adaptable to different robot models and human subjects [29]. Assuming a degree of topological similarity between the human’s and robot’s mechanical structures, WBGR uses $m$ correspondences between frames associated with $m$ human and robot links at a reference configuration. Given the human link orientations ${}^{I}R_{H_i}$, $i \in \{1, \dots, m\}$, to be retargeted onto the robot, WBGR retrieves the robot joint angles by solving the corresponding inverse kinematics problem.
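The sketch below illustrates the kind of inverse-kinematics step described above: given the $m$ retargeted human link orientations, find the robot joint angles whose corresponding link orientations best match them in a least-squares sense, assuming the paired human and robot frames coincide at the reference configuration. The helper `fk_link_orientation` and the variable names are assumptions for illustration; the actual WBGR solver in [29] may be formulated differently.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def wbgr_joint_angles(R_human, fk_link_orientation, s0):
    """Retrieve robot joint angles matching m retargeted human link orientations.

    R_human            : list of m (3, 3) matrices I_R_Hi from the human.
    fk_link_orientation: callable (s, i) -> (3, 3) orientation of the i-th robot
                         link in the inertial frame I for joint vector s
                         (assumed to come from a robot-model / forward-kinematics
                         library).
    s0                 : (n,) initial guess for the robot joint angles.
    """
    def residuals(s):
        err = []
        for i, R_Hi in enumerate(R_human):
            R_Li = fk_link_orientation(s, i)
            # Orientation mismatch as a rotation vector: zero when the robot
            # link is aligned with the corresponding human link.
            err.append(Rotation.from_matrix(R_Li.T @ R_Hi).as_rotvec())
        return np.concatenate(err)

    return least_squares(residuals, s0).x
```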
