Abstract

Emerging research has demonstrated that statistical learning is a modality-specific ability governed by domain-general principles. Yet limited research has investigated different forms of statistical learning within a single modality. This paper explores whether there is one unified statistical learning mechanism within the visual modality or separate task-specific abilities. To do so, we examined individual differences in spatial and nonspatial conditional and distributional statistical learning. Participants completed four visual statistical learning tasks: conditional spatial, conditional nonspatial, distributional spatial, and distributional nonspatial. Performance on the four tasks correlated significantly with one another, and performance shared across all tasks accounted for a large proportion of the variance across tasks (57%). Interestingly, performance on each of the individual tasks also accounted for an additional portion of the variance in task performance (between 11% and 18%). Our results suggest that visual statistical learning is the result of the interplay between a unified mechanism for extracting conditional and distributional statistical regularities across time and space, and an individual's ability to extract specific types of regularities.
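To illustrate what "variance shared across tasks" means here, the sketch below shows one common way such a figure can be computed: the leading eigenvalue of the inter-task correlation matrix, divided by the number of tasks, gives the proportion of variance captured by a single common component. This is a minimal, hypothetical illustration only; the correlation values are invented for the example and are not the paper's data, and the authors' actual analysis may differ.

```python
import numpy as np

# Hypothetical correlation matrix for the four visual statistical learning tasks
# (conditional spatial, conditional nonspatial, distributional spatial,
# distributional nonspatial). Values are illustrative, not the paper's data.
R = np.array([
    [1.00, 0.45, 0.40, 0.38],
    [0.45, 1.00, 0.42, 0.41],
    [0.40, 0.42, 1.00, 0.44],
    [0.38, 0.41, 0.44, 1.00],
])

# Eigendecomposition of the correlation matrix: the largest eigenvalue divided
# by the number of tasks gives the share of variance captured by a single
# common component across all four tasks.
eigenvalues = np.linalg.eigvalsh(R)            # returned in ascending order
shared_variance = eigenvalues[-1] / R.shape[0]
print(f"Variance explained by a single common component: {shared_variance:.0%}")
```

With the illustrative correlations above, the common component captures roughly half to two-thirds of the variance, which is the sense in which a single shared factor can account for a figure such as 57% while leaving room for task-specific contributions.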
