Abstract

Across a wide range of tasks, vision appears to process input best when it is distributed spatially rather than temporally, whereas audition shows the opposite pattern. Here we asked whether such modality constraints also affect implicit statistical learning in an artificial grammar learning task. Participants were exposed to statistically governed input sequences and then tested on their ability to classify novel items. We compared three presentation formats (visual input distributed spatially, visual input distributed temporally, and auditory input distributed temporally) and two presentation rates: moderate (4 elements/second) and fast (8 elements/second). Overall, learning was best for the visual-spatial and auditory formats. Additionally, at the faster presentation rate, performance declined only in the visual-temporal condition. Finally, auditory learning was mediated by heightened sensitivity to the endings of input sequences, whereas visual learning was most sensitive to their beginnings. These results suggest that statistical learning of sequential and spatial patterns proceeds differently in the visual and auditory modalities.
