Abstract

We investigated the real-time cascade of postural, visual, and manual actions for object prehension in 38 6- to 12-month-old infants (all independent sitters) and eight adults. Participants' task was to retrieve a target as they spun past it at different speeds on a motorized chair. A head-mounted eye tracker recorded visual actions, and video captured postural and manual actions. Prehension played out in a coordinated sequence of postural-visual-manual behaviors, starting with turning the head and trunk to bring the toy into view, which in turn instigated the start of the reach. Visually fixating the toy to locate its position guided the hand for toy contact and retrieval. Prehension performance decreased at faster speeds, but quick planning and implementation of actions predicted better performance.
