Abstract

Any passive rigid inertial object that we hold in our hand, e.g., a tennis racquet, imposes a field of forces on the arm that depends on limb position, velocity, and acceleration. A fundamental characteristic of this field is that the forces due to acceleration and velocity are linearly separable in the intrinsic coordinates of the limb. In order to learn such dynamics with a collection of basis elements, a control system would generalize correctly and therefore perform optimally if the basis elements that were sensitive to limb velocity were not sensitive to acceleration, and vice versa. However, in the mammalian nervous system proprioceptive sensors like muscle spindles encode a nonlinear combination of all components of limb state, with sensitivity to velocity dominating sensitivity to acceleration. Therefore, limb state in the space of proprioception is not linearly separable despite the fact that this separation is a desirable property of control systems that form models of inertial objects. In building internal models of limb dynamics, does the brain use a representation that is optimal for control of inertial objects, or a representation that is closely tied to how peripheral sensors measure limb state? Here we show that in humans, patterns of generalization of reaching movements in acceleration-dependent fields are strongly inconsistent with basis elements that are optimized for control of inertial objects. Unlike a robot controller that models the dynamics of the natural world and represents velocity and acceleration independently, internal models of dynamics that people learn appear to be rooted in the properties of proprioception, nonlinearly responding to the pattern of muscle activation and representing velocity more strongly than acceleration.

Highlights

  • When we hold rigid objects firmly in our hand, the resulting dynamics of our arm+object is a field of forces that depends on the motion of our limb, i.e., limb position, velocity, and acceleration

  • In building internal models of limb dynamics, does the brain use a representation that is optimal for control of inertial objects, or a representation that is closely tied to how peripheral sensors measure limb state? Here we show that in humans, patterns of generalization of reaching movements in acceleration-dependent fields are strongly inconsistent with basis elements that are optimized for control of inertial objects

  • If we wish to build a robot that can learn to reach while firmly holding passive rigid objects, we might rely on a model of inverse dynamics (θ, θ̇, θ̈) → τ̂ that estimates the forces τ̂ that are necessary to achieve a particular desired state θ, θ̇, θ̈ via a set of basis elements: τ̂ = Σᵢ pᵢ gᵢ(θ, θ̇, θ̈)
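The basis-element model τ̂ = Σᵢ pᵢ gᵢ(θ, θ̇, θ̈) can be sketched numerically. The following is a minimal illustration, not taken from the paper: the inertia value, the Gaussian form of the basis elements gᵢ, and all parameters are assumptions. Weights pᵢ are fit by least squares to a purely inertial field (position dependence omitted for brevity).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-DOF example: the held object adds pure inertia, so the
# true inverse dynamics is tau = I * acc (position dependence omitted).
I = 0.05  # added inertia, kg*m^2 (assumed value)

# Basis elements g_i: Gaussian bumps tiling (velocity, acceleration) space.
centers_v, centers_a = np.meshgrid(np.linspace(-2, 2, 7),
                                   np.linspace(-10, 10, 7))
centers_v, centers_a = centers_v.ravel(), centers_a.ravel()

def g(vel, acc, sv=1.0, sa=5.0):
    """Each column is one basis element evaluated at the given limb states."""
    return np.exp(-(vel[:, None] - centers_v) ** 2 / (2 * sv ** 2)
                  - (acc[:, None] - centers_a) ** 2 / (2 * sa ** 2))

# Sample training states and fit the weights p_i by least squares, so that
# tau_hat = sum_i p_i * g_i(state) approximates the force field.
vel = rng.uniform(-2, 2, 500)
acc = rng.uniform(-10, 10, 500)
G = g(vel, acc)
tau = I * acc
p, *_ = np.linalg.lstsq(G, tau, rcond=None)
tau_hat = G @ p  # learned model's force estimates on the training states
```

Note that these particular bases are separable in the sense discussed above: a change along the velocity axis does not alter how a basis element responds to acceleration, and vice versa.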



Introduction

When we hold rigid objects firmly in our hand, the resulting dynamics of our arm+object is a field of forces that depends on the motion of our limb, i.e., limb position, velocity, and acceleration. In such a field, a typical reach is expected to experience zero net force when the forces are expressed in terms of hand velocity. This implies that learning to reach in an acceleration-dependent field would be impossible with bases that are only sensitive to limb velocity. The coding of scenario 1 predicts that learning of an acceleration field in one direction should strongly generalize to the opposite direction. In simple terms, this means that if one can represent acceleration independent of velocity, learning to move a mass in one direction will generalize to movements in the opposite direction, even though these two movements involve very different patterns of muscle activation. We performed an experiment to test whether adaptation to an acceleration-dependent field generalizes from one direction of movement to the opposite direction.
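The opposing predictions can be illustrated with a toy simulation. This is our own sketch, not the paper's analysis: the minimum-jerk trajectory, the added-mass value, and the "spindle-like" rectified, velocity-dominant basis are all illustrative assumptions. Weights are fit to the forces of an acceleration-dependent field during a forward reach, and the learned model is then tested on a reach in the opposite direction.

```python
import numpy as np

# Minimum-jerk reach (1-D), a common model of point-to-point hand motion.
def min_jerk(d, T=0.5, n=200):
    s = np.linspace(0, 1, n)
    vel = d / T * (30 * s**2 - 60 * s**3 + 30 * s**4)
    acc = d / T**2 * (60 * s - 180 * s**2 + 120 * s**3)
    return vel, acc

m = 2.0  # added mass in kg (assumed value)
vel_f, acc_f = min_jerk(d=+0.1)        # forward reach
vel_b, acc_b = min_jerk(d=-0.1)        # reach in the opposite direction
F_f, F_b = m * acc_f, m * acc_b        # acceleration-dependent field

def fit_and_test(features):
    """Fit weights on the forward reach; return (training, transfer) RMSE."""
    Gf, Gb = features(vel_f, acc_f), features(vel_b, acc_b)
    p, *_ = np.linalg.lstsq(Gf, F_f, rcond=None)
    train_err = np.sqrt(np.mean((Gf @ p - F_f) ** 2))
    transfer_err = np.sqrt(np.mean((Gb @ p - F_b) ** 2))
    return train_err, transfer_err

# Scenario 1: bases encode acceleration separately from velocity.
sep = lambda v, a: np.column_stack([v, a])

# Hypothetical spindle-like bases: velocity-dominant, rectified so that
# each "sensor" responds to movement in only one direction.
spindle = lambda v, a: np.column_stack(
    [np.maximum(v + 0.1 * a, 0), np.maximum(-(v + 0.1 * a), 0)])

print(fit_and_test(sep))      # transfer error is essentially zero
print(fit_and_test(spindle))  # transfer error far exceeds training error
```

With the separable basis, least squares recovers the mass exactly, so learning transfers perfectly to the opposite direction; with the rectified velocity-dominant basis, the channels that were trained are silent during the opposite reach, and transfer fails.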

Methods
Experimental setup
Experimental procedure
Results
Discussion