Abstract

Anticipatory eye movements are reported in studies on motor control as well as on language comprehension, implying that this major orienting system is involved in generating goal-directed behaviour in both the action and the language domain. The cognitive contribution of these anticipatory eye movements to language and motor control, however, is still not well understood. This study investigated whether anticipatory eye movements reflect the working of a predictive mechanism that is shared between action and language and, if so, whether the predictions are based primarily on an anticipation of the next discrete event (movement or word) or rather represent a semantic understanding of the end goal of the whole event (action or sentence). To this end, we designed two highly comparable paradigms with complex action sequences – one relying more strongly on the action system and the other on the language system. The data demonstrated a pattern of predictive looks in our action observation paradigm that was similar to that observed in the visual world paradigm. These findings provide empirical evidence for the idea of a shared predictive mechanism that allows for fluent behaviour in action and language. Moreover, both paradigms showed an increase in predictive looks in the final action step. This finding implies that, when making decisions about complex action sequences, the predictive mechanism accumulates semantic information relevant for our overall (motor or linguistic) behavioural goals rather than merely predicting discrete events. Such a predictive mechanism facilitates understanding of complex situations, allowing for efficient and adaptive interaction with our environment.
