Abstract

Shared attention across individuals is a crucial component of joint activities, modulating how we perceive relevant information. In this study, we explored shared attention in language production and memory across separate representation levels. In a shared go/no-go task, pairs of participants responded to objects displayed on a screen: One participant reacted according to the animacy of the object (semantic task), while her partner reacted to the first letter/phoneme (phoneme-monitoring task). Objects could require a response from one participant, from both participants, or from neither. Only participants assigned to the phoneme-monitoring task responded faster on joint trials than on alone trials. However, results from a memory recall test showed that, for both partners, recall was more accurate for items to which the partner had responded and for jointly responded items. Overall, our findings suggest that partners co-represent each other's language features even when they do not engage in the same task.
