Abstract

Trajectory optimization has been an important approach in biomechanics for analyzing and predicting limb movement, and such approaches have also paved the way for motion planning of biped and quadruped robots. Most of these methods are deterministic, relying on first-order iterative gradient-based algorithms with constrained, differentiable objective functions. This differentiability requirement, however, precludes non-differentiable objective functions such as the metabolic energy expenditure (MEE) function, which is highly relevant for physiological systems and can even be formulated over the muscle space. This paper combines the prevalent direct collocation-based optimal control method with a stochastic trajectory optimization method based on Policy Improvement with Path Integrals (PI2) to study the human sit-to-stand (STS) motion. The PI2 method, which uses reinforcement learning of a Dynamic Movement Primitive (DMP) to learn a goal-directed trajectory, is implemented and validated against experimental results in both joint space and muscle space.
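To illustrate the goal-directed trajectory representation that PI2 optimizes, the sketch below integrates a standard discrete Dynamic Movement Primitive: a critically damped spring-damper system driven toward a goal, modulated by a learned forcing term over a canonical phase variable. All function names, gain values, and basis-function parameters here are illustrative assumptions, not taken from the paper; in a PI2 loop, the basis weights would be perturbed with exploration noise and updated by cost-weighted averaging, which is not shown.

```python
import numpy as np

def dmp_rollout(y0, g, weights, centers, widths,
                tau=1.0, dt=0.001, alpha_z=25.0, beta_z=6.25, alpha_x=8.0):
    """Euler-integrate a discrete DMP from start y0 toward goal g.

    The forcing term f(x) is a normalized radial-basis-function mixture,
    scaled by the phase x and the movement amplitude (g - y0), so it
    vanishes as the canonical system decays and the attractor dominates.
    """
    n_steps = int(tau / dt)
    x, y, z = 1.0, y0, 0.0          # canonical phase, position, scaled velocity
    traj = np.empty(n_steps)
    for t in range(n_steps):
        psi = np.exp(-widths * (x - centers) ** 2)               # RBF activations
        f = x * (g - y0) * psi.dot(weights) / (psi.sum() + 1e-10)  # forcing term
        z_dot = (alpha_z * (beta_z * (g - y) - z) + f) / tau     # spring-damper
        y_dot = z / tau
        x_dot = -alpha_x * x / tau                               # phase decay
        z += z_dot * dt
        y += y_dot * dt
        x += x_dot * dt
        traj[t] = y
    return traj
```

With zero weights the forcing term drops out and the trajectory converges smoothly to the goal; PI2 shapes the transient by adjusting the weights while the goal attractor guarantees convergence.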

Full Text
Published version (Free)