Abstract

The effects of statistical learning and congruency on multi-modal binding were examined. Because pattern acquisition is stronger for within-object than for between-object associations, whether this bias extends from within-object to within-modality processing was tested, and the effect of statistical learning on between-modality learning was assessed. Dyson and Ishfaq's (2008) paradigm was adapted, manipulating the frequency of within- and between-modality associations (Experiment 1) and both frequency and congruency (Experiment 2). Each experiment comprised baseline (no predictive value), intra-modal (intra-modal predictive value), and inter-modal (inter-modal predictive value) conditions. Experiment 1 showed faster performance for within-object judgments and fewer errors on within-object judgments, except in the inter-modal condition. Experiment 2 replicated this pattern, with cross-experimental analyses showing weak congruency effects. Probability manipulations led mostly to interference on same-modality trials rather than facilitation on different-modality trials, suggesting that although frequent different-modality associations did not produce superior performance, expectancies of such associations may have weakened sensitivity to the within-modality bias.

Highlights

  • Audition is another critical sense required for successful navigation of one's perceptual world, as it provides alerting information and cues for localization of objects, and enables a level of environmental interaction that approaches that of vision (Schiffman, 2001)

  • For Experiment 3, wherein two grammars sharing one dimension in the same modality were presented, correct responding was at 60.0% for one set and 56.0% for the second set, the latter of which did not differ significantly from chance. This demonstrated that when grammars were presented in one dimension uni-modally, only one of the grammars could be learned, due to the strong perceptual similarities between the two grammars. These results suggest that learning of the statistical structure was stimulus-specific rather than abstract: when the stimulus attributes of two grammars were distinguishable enough, the grammars could be learned simultaneously, whereas multiple grammars within the same sensory dimension could not be learned concurrently and caused difficulty for participants

  • There was a marginal effect of modality, F(1, 11) = 4.725, p = .052, ηp2 = .300, showing responses were slightly faster on same-modality trials (827 ms) than on different-modality trials (868 ms), and a main effect of order, F(1, 11) = 11.864, p < .01, ηp2 = .519, showing that participants responded faster for second responses (802 ms) than for first responses (893 ms)
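
The partial eta squared values in the highlight above follow directly from the F statistics and their degrees of freedom, since ηp2 = (F × df_effect) / (F × df_effect + df_error). The following minimal Python sketch (not from the original paper) verifies the two reported effect sizes:

    def partial_eta_squared(f_value, df_effect, df_error):
        # Partial eta squared from an F statistic and its degrees of freedom:
        # eta_p^2 = (F * df_effect) / (F * df_effect + df_error)
        return (f_value * df_effect) / (f_value * df_effect + df_error)

    # Marginal effect of modality: F(1, 11) = 4.725
    print(round(partial_eta_squared(4.725, 1, 11), 3))   # -> 0.3   (reported as .300)

    # Main effect of order: F(1, 11) = 11.864
    print(round(partial_eta_squared(11.864, 1, 11), 3))  # -> 0.519 (reported as .519)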



Introduction

All interactions with the world are experienced through information sensed via the modalities of vision, audition, taction, gustation, and olfaction. Vision is undisputed as the dominant sense for humans (Calvert, Spence, & Stein, 2004), given that approximately 70% of all sensory receptors are found in the eye (Pasternak, 2005) and resources for visual processing comprise nearly half of the cerebral cortex (Sereno et al., 1995). Audition is another critical sense required for successful navigation of one's perceptual world, as it provides alerting information and cues for localizing objects, and enables a level of environmental interaction that approaches that of vision (Schiffman, 2001). As we begin to associate information from one domain with information from another, multiple factors are implicated in the process of audiovisual binding, including temporal and spatial contributions (e.g., Calvert, Spence, & Stein, 2004), statistical learning (e.g., Ernst, 2007), and congruency (e.g., Molholm, Ritter, Javitt, & Foxe, 2004).

