Abstract

Tendon-driven systems are ubiquitous in biology and offer considerable advantages for robotic manipulators, but controlling them is challenging because of their increased dimensionality and intrinsic nonlinearities. Researchers in biological movement control have suggested that the brain may employ “muscle synergies” to make planning, control, and learning more tractable by expressing the tendon space in a lower-dimensional virtual synergistic space. We employ synergies that respect the differing constraints of actuation and sensation, and apply path integral reinforcement learning both in the virtual synergy space and in the full tendon space. Path integral reinforcement learning has been used successfully on torque-driven systems to learn episodic tasks without explicit models, which is particularly important for difficult-to-model dynamics such as tendon networks and contact transitions. We show that optimizing a small number of trajectories in virtual synergy space can produce performance comparable to optimizing the trajectories of the tendons individually. The six tendons of the index finger and the eight tendons of the thumb, with each digit actuating four joint degrees of freedom, are used to slide a switch and turn a knob. The learned controllers provide a method for discovering novel task strategies and system phenomena without explicitly modeling the physics of the robot and environment.
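
As a rough illustration of the ideas above, the sketch below pairs a fixed synergy matrix (mapping a few virtual synergy activations to many tendon commands) with a simplified, episodic PI²-style update that weights whole-rollout parameter perturbations by their exponentiated total cost. All dimensions, the random synergy matrix, and the quadratic tracking cost are hypothetical placeholders, not the paper's tendon model or task costs; this is a minimal sketch of the technique, not the authors' implementation.

```python
import numpy as np

# --- Hypothetical dimensions, chosen only for illustration ---
N_TENDONS = 6      # e.g. the index finger's six tendons
N_SYNERGIES = 3    # a lower-dimensional virtual synergy space
N_STEPS = 50       # time steps per episodic rollout
N_ROLLOUTS = 20    # noisy rollouts per policy-improvement update
LAMBDA = 1.0       # temperature of the exponentiated-cost weighting

rng = np.random.default_rng(0)

# A fixed synergy matrix W maps synergy activations (low-dimensional) to
# tendon commands (high-dimensional). Here it is random; in the paper the
# synergies respect the constraints of actuation and sensation.
W = rng.standard_normal((N_TENDONS, N_SYNERGIES))

# Policy parameters: one open-loop activation per synergy per time step.
theta = np.zeros((N_STEPS, N_SYNERGIES))

def rollout_cost(tendon_trajectory):
    """Stand-in for the real episodic task cost (slide a switch, turn a
    knob). Here: track an arbitrary reference tendon trajectory."""
    reference = np.sin(np.linspace(0.0, np.pi, N_STEPS))[:, None]
    return np.sum((tendon_trajectory - reference) ** 2)

def pi2_update(theta, noise_std=0.1):
    """One simplified PI^2-style update: perturb the synergy-space
    parameters, evaluate episodic costs without any dynamics model, and
    average the perturbations weighted by exp(-cost / lambda)."""
    eps = noise_std * rng.standard_normal((N_ROLLOUTS, N_STEPS, N_SYNERGIES))
    costs = np.empty(N_ROLLOUTS)
    for k in range(N_ROLLOUTS):
        synergy_traj = theta + eps[k]        # noisy synergy activations
        tendon_traj = synergy_traj @ W.T     # project into tendon space
        costs[k] = rollout_cost(tendon_traj)
    # Normalize costs, then convert to probability weights per rollout.
    s = (costs - costs.min()) / max(costs.max() - costs.min(), 1e-12)
    weights = np.exp(-s / LAMBDA)
    weights /= weights.sum()
    # Probability-weighted average of the perturbations updates the policy.
    return theta + np.einsum('k,kts->ts', weights, eps)

for _ in range(100):
    theta = pi2_update(theta)
```

Optimizing in the full tendon space would follow the same loop with `theta` defined directly over the tendon commands (here a 50 x 6 array instead of 50 x 3), which is what makes the dimensionality reduction of the synergy parameterization attractive.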
