Abstract

Estimates of limb posture are critical for controlling robotic systems. This is generally accomplished with angle sensors at individual joints that simplify control but can complicate mechanical design and robustness. Limb posture should be derivable from each joint's actuator shaft angle but this is problematic for compliant tendon-driven systems where (i) motors are not placed at the joints and (ii) nonlinear tendon stiffness decouples the relationship between motor and joint angles. Here we propose a novel machine learning algorithm to accurately estimate joint posture during dynamic tasks by limited training of an artificial neural network (ANN) receiving motor angles and tendon tensions, analogous to biological muscle and tendon mechanoreceptors. Simulating an inverted pendulum—antagonistically-driven by motors and nonlinearly-elastic tendons—we compare how accurately ANNs estimate joint angles when trained with different sets of non-collocated sensory information generated via random motor-babbling. Cross-validating with new movements, we find that ANNs trained with motor angles and tendon tension data predict joint angles more accurately than ANNs trained without tendon tension. Furthermore, these results are robust to changes in network/mechanical hyper-parameters. We conclude that regardless of the tendon properties, actuator behavior, or movement demands, tendon tension information invariably improves joint angle estimates from non-collocated sensory signals.
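The estimation scheme described above can be sketched in a toy simulation. Everything below is an illustrative assumption rather than the paper's actual model: tendon elasticity is taken as quadratic, a hidden disturbance torque stands in for the dynamics that decouple motor and joint angles, and the ANN is approximated by a single-hidden-layer network with fixed random tanh features fit by least squares (an ELM-style stand-in for a backprop-trained network).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the setup in the abstract (all constants are illustrative
# assumptions, not values from the study): a joint driven by two antagonistic
# motors through tendons with quadratic (nonlinear) elasticity, plus a random
# external torque representing unmodeled dynamics.
K = 5.0                            # assumed tendon stiffness coefficient
Q = np.linspace(-1.5, 1.5, 601)    # candidate joint angles for equilibrium search

def tension(stretch):
    """Nonlinear tendon: tension = K * stretch^2, slack (zero) below zero stretch."""
    return K * np.maximum(stretch, 0.0) ** 2

def babble(n):
    """Random motor babbling; returns motor angles, tendon tensions, joint angles."""
    m1 = rng.uniform(0.5, 1.5, n)
    m2 = rng.uniform(0.5, 1.5, n)
    tau = rng.uniform(-5.0, 5.0, n)           # hidden disturbance torque
    t1 = tension(m1[:, None] - Q)             # agonist stretch grows as q decreases
    t2 = tension(m2[:, None] + Q)             # antagonist stretch grows with q
    # Joint settles where the net tendon torque balances the disturbance:
    q = Q[np.argmin(np.abs(t1 - t2 - tau[:, None]), axis=1)]
    return m1, m2, tension(m1 - q), tension(m2 + q), q

def fit_predict(Xtr, ytr, Xte, hidden=200):
    """One-hidden-layer net with fixed random tanh features, trained by least squares."""
    W = rng.normal(size=(Xtr.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = lambda X: np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H(Xtr), ytr, rcond=None)
    return H(Xte) @ beta

m1, m2, t1, t2, q = babble(2500)
tr, te = slice(0, 2000), slice(2000, 2500)
full = np.column_stack([m1, m2, t1, t2])      # motor angles + tendon tensions
bare = np.column_stack([m1, m2])              # motor angles only

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
err_with = rmse(fit_predict(full[tr], q[tr], full[te]), q[te])
err_without = rmse(fit_predict(bare[tr], q[tr], bare[te]), q[te])
print(f"RMSE with tensions:    {err_with:.3f} rad")
print(f"RMSE without tensions: {err_without:.3f} rad")
```

Because the disturbance torque is invisible to the motor angles alone, the motors-only network cannot disambiguate the joint angle, whereas the tendon tensions expose the actual tendon stretch; the with-tension estimator therefore achieves a lower cross-validated error, mirroring the paper's qualitative finding in miniature.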

Highlights

  • What are the control mechanisms by which Nature masters versatile limb movements that robots have yet to learn? Even the smallest of creatures can quickly and expertly learn to control their limbs for a variety of tasks, yet robots struggle to match this level of performance and generalizability

  • When motor babbling duration was varied between 1 and 25 s, we found that artificial neural networks (ANNs) trained with tendon tension data provided more accurate joint angle estimates across all babbling durations and generalization movements (Figure 7)

  • ANNs trained with the Bio-Inspired Set performed as well as those trained with the set of All Available Data



Introduction

What are the control mechanisms by which Nature masters versatile limb movements that robots have yet to learn? Even the smallest of creatures can quickly and expertly learn to control their limbs for a variety of tasks, yet robots struggle to match this level of performance and generalizability. In the absence of visual feedback, joint posture sensing is generally accomplished by placing sensors directly on the joints. This increases limb inertia, complicates mechanical design, can be an additional source of noise, and risks damage (Marjaninejad et al., 2019b, 2019c). These adverse effects become more pronounced for slender or deformable limbs (e.g., fingers in tendon-driven robotic hands), where it may be impossible or impractical to sensorize the joints.

