Abstract
Bringing together a Riemannian geometry account of visual space with a complementary account of human movement synergies, we present a neurally-feasible computational formulation of visuomotor task performance. This cohesive geometric theory addresses inherent nonlinear complications underlying the match between a visual goal and an optimal action to achieve that goal: (i) the warped geometry of visual space causes the position, size, outline, curvature, velocity and acceleration of images to change with changes in the place and orientation of the head; (ii) the relationship between head place and body posture is ill-defined; and (iii) mass-inertia loads on muscles vary with body configuration and affect the planning of minimum-effort movement. We describe a partitioned visuospatial memory consisting of the warped posture-and-place-encoded images of the environment, including images of visible body parts. We depict synergies as low-dimensional submanifolds embedded in the warped posture-and-place manifold of the body. A task-appropriate synergy corresponds to a submanifold containing those postures and places that match the posture-and-place-encoded visual images that encompass the required visual goal. We set out a reinforcement learning process that tunes an error-reducing association memory network to minimize any mismatch, thereby coupling visual goals with compatible movement synergies. A simulation of a two-degrees-of-freedom arm illustrates that, despite warping of both visual space and posture space, there exists a smooth, invertible (one-to-one and onto) mapping between vision and proprioception.
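The closing claim of the abstract — a smooth, invertible mapping between vision and proprioception for a two-degrees-of-freedom arm — can be sketched numerically. The following is a minimal illustration, not the authors' simulation: it assumes a planar two-link arm with hypothetical link lengths, maps joint angles (proprioception) to hand position (vision), and checks that on the elbow-bent branch the Jacobian determinant is nonzero and a closed-form inverse recovers the posture.

```python
import numpy as np

L1, L2 = 0.30, 0.25  # hypothetical link lengths (m)

def forward(q):
    """Map joint angles q = (shoulder, elbow) to hand position (x, y)."""
    q1, q2 = q
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return np.array([x, y])

def jacobian_det(q):
    """det J = L1*L2*sin(q2): nonzero whenever the elbow is bent."""
    return L1 * L2 * np.sin(q[1])

def inverse(p):
    """Closed-form inverse kinematics on the elbow-bent branch (0 < q2 < pi)."""
    x, y = p
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    q2 = np.arccos(np.clip(c2, -1.0, 1.0))
    q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
    return np.array([q1, q2])

q = np.array([0.4, 1.1])              # a posture with the elbow bent
p = forward(q)
assert abs(jacobian_det(q)) > 0       # smooth and locally invertible here
assert np.allclose(inverse(p), q)     # one-to-one and onto its image
```

Restricting the elbow to one branch is what makes the map a bijection onto its image; at full extension (sin q2 = 0) the Jacobian degenerates and invertibility is lost, which is why the domain restriction matters.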
Highlights
Although there is much evidence that natural behaviour is organized into a chain of multisensory goals, and that a series of small discrete movements is planned and strung together into a continuous sequence to achieve those goals, we do not yet have a formal mathematical theory of the underlying neural computational processing.
Section 5: We describe the Riemannian geometry of minimum-effort movement synergies for visual tasks with N ≤ 10 control degrees of freedom (CDOFs).
Section 6: We present the Riemannian geometry of proprioception-to-vision and vision-to-proprioception maps, taking into account the redundancy between the many elemental movements of the body sensed proprioceptively and the three dimensions of visual space.
Summary
Although there is much evidence that natural behaviour is organized into a chain of multisensory goals, and that a series of small discrete movements is planned and strung together into a continuous sequence to achieve those goals, we do not yet have a formal mathematical theory of the underlying neural computational processing. The mathematical theory presented here, concerning the selection and sequencing of minimum-effort, multi-joint, coordinated movements compatible with visual goals, has been developed with awareness of the many issues outlined above. Likewise, it has been developed cognizant of other theoretical models that seek to understand how the many biomechanical and muscular degrees of freedom (DOFs) of the human body are coordinated to achieve a specific goal. In this paper we combine our previous, separate applications of Riemannian geometry to action [5] and to vision [6] to develop a Riemannian geometry theory of the computational processes required in the planning and execution of minimum-effort, visually-guided movement synergies that achieve specified visual goals. In particular, we relate Riemannian geometry to work on motor synergies, optical flow, and the dissociation of perception and action in illusions.
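Point (iii) of the abstract — that mass-inertia loads vary with body configuration and so shape minimum-effort planning — can also be made concrete. The sketch below is an illustrative assumption, not the paper's model: it uses the textbook inertia matrix of a planar two-link arm with point masses at the link ends (masses and lengths are hypothetical) as the Riemannian metric on posture space, and shows that the same joint velocity costs more effort with the arm extended than with the elbow flexed.

```python
import numpy as np

L1, L2 = 0.30, 0.25   # hypothetical link lengths (m)
M1, M2 = 1.8, 1.2     # hypothetical point masses at link ends (kg)

def inertia(q2):
    """Configuration-dependent mass-inertia matrix M(q) of a two-link arm
    (point-mass model); it plays the role of the Riemannian metric on
    posture space, and depends only on the elbow angle q2."""
    c2 = np.cos(q2)
    m11 = M1 * L1**2 + M2 * (L1**2 + L2**2 + 2 * L1 * L2 * c2)
    m12 = M2 * (L2**2 + L1 * L2 * c2)
    m22 = M2 * L2**2
    return np.array([[m11, m12], [m12, m22]])

def effort(q2, dq):
    """Kinetic-energy cost dq^T M(q) dq of joint velocity dq at elbow angle q2."""
    return dq @ inertia(q2) @ dq

dq = np.array([1.0, 0.0])                        # a pure shoulder rotation...
assert effort(0.0, dq) > effort(np.pi / 2, dq)   # ...costs more when extended
```

Because the metric varies with posture, minimum-effort paths between postures are geodesics of this metric rather than straight lines in joint space, which is the sense in which the planning problem is Riemannian.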