Abstract

When observing someone else acting on an object, people implement goal-specific eye movement programs that are driven by their own motor representation of the observed action. Usually, however, we observe people acting in contexts where several objects of different shapes and sizes are present. Is our brain able to select the intended target even when multiple objects are in the visual scene? And if so, what kind of information does our motor system capitalize on? We recorded eye movements while participants observed an actor reaching for and grasping one of two objects that required two different kinds of grip to be picked up. In a control condition, the actor merely reached for and touched one of the two objects without preshaping her hand according to the target's features. Results showed higher accuracy and earlier saccadic movements when participants observed a grasping hand than when they observed a merely reaching hand devoid of any target-related preshaping. This clearly suggests that the hand preshaping provided the observer with enough motor cues to proactively and reliably saccade toward the object to be grasped, thus identifying it even when the action target was not known in advance. Our findings strongly corroborate the direct matching hypothesis, which suggests that in processing others' actions we take advantage of the same motor knowledge that enables us to perform those actions efficiently.
