Abstract

Does multisensory distractor-target context learning enhance visual search over and above unisensory learning? To address this question, we had participants perform a visual search task under both uni- and multisensory conditions. Search arrays consisted of one Gabor target that differed from three homogeneous distractors in orientation; participants had to discriminate the target's orientation. In the multisensory session, additional tactile (vibration-pattern) stimulation was delivered to two fingers of each hand, with the odd-one-out tactile target and the tactile distractors co-located with the corresponding visual items in half of the trials; the other half presented the visual array only. In both sessions, the visual target was embedded within identical (repeated) spatial arrangements of distractors in half of the trials. The results revealed faster response times to targets in repeated versus non-repeated arrays, evidencing 'contextual cueing'. This effect was enhanced in the multisensory session; critically, the enhancement emerged even when the visual arrays were presented without concurrent tactile stimulation. Drift-diffusion modeling confirmed that contextual cueing increased the rate at which task-relevant information was accumulated and decreased the amount of evidence required for a response decision. Importantly, multisensory learning selectively enhanced the evidence-accumulation rate, expediting target detection even when the context memories were triggered by visual stimuli alone.
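
To make the drift-diffusion account concrete, the sketch below simulates how the two reported parameter changes map onto response times. This is an illustrative simulation, not the authors' fitting code, and all parameter values are hypothetical: raising the drift rate v (faster evidence accumulation) and lowering the boundary separation a (less evidence required) both shorten the simulated search RTs.

```python
import numpy as np

def simulate_ddm(v, a, z=0.5, t0=0.3, s=1.0, dt=0.001, n_trials=2000, seed=None):
    """Simulate first-passage times of a drift-diffusion process.

    v  : drift rate (rate of evidence accumulation)
    a  : boundary separation (evidence required for a decision)
    z  : relative starting point (0.5 = unbiased)
    t0 : non-decision time (encoding + motor), in seconds
    s  : diffusion noise
    """
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = z * a, 0.0
        # Accumulate noisy evidence until one of the two boundaries is hit.
        while 0.0 < x < a:
            x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t0 + t)
        correct.append(x >= a)  # upper boundary = correct response
    return np.asarray(rts), np.asarray(correct)

# Hypothetical parameters: 'old' (repeated) contexts get a higher drift
# rate and a slightly lower boundary than 'new' contexts.
rt_new, _ = simulate_ddm(v=1.0, a=1.2, seed=1)
rt_old, _ = simulate_ddm(v=1.4, a=1.0, seed=1)
print(f"mean RT new: {rt_new.mean():.3f} s, old: {rt_old.mean():.3f} s")
```

On this account, contextual cueing affects both v and a, whereas the additional multisensory benefit loads selectively on v.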

Highlights

  • Participants performed a visual search task under both unisensory (visual-only) and multisensory (visuo-tactile) conditions, testing whether multisensory distractor-target context learning enhances visual search over and above unisensory learning

  • Predictive tactile distractors were presented at the participants' finger locations while they searched for a visual odd-one-out target within a co-located visual array

  • The study examined the impact of multisensory experience on statistical context learning in a visual search task

Introduction

In the classic contextual-cueing paradigm, the spatial arrangements of the distractor and target stimuli are repeated in half of the trials (i.e., 'old' contexts), whereas in the other half, the distractor locations are generated anew on each trial (i.e., non-repeated, 'new' contexts). Visual search is facilitated for old, as compared to new, contexts, an effect termed 'contextual cueing'.

In addition to the generally slower processing of touch-defined target and distractor stimuli, tests using hand-posture manipulations, such as flipped hands, revealed that tactile-to-visual contextual cueing is mediated by an environmental reference frame. This suggests that part of the tactile lead time is required for the tactile item configuration to be remapped from an initially somatotopically sensed format [14] into a common (visual) spatial representation for crossmodal contextual facilitation to occur [13,15]. Perceptual learning is more likely to involve distributed processing of information from multiple sensory modalities for regularities that eventually come to be stored in long-term (in our case: context) memory, and this may be the case even if only one modality is stimulated at the time of retrieval.
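
For readers unfamiliar with the paradigm, here is a minimal sketch of how such old/new trial sequences are typically generated. The grid dimensions, item count, and function names are illustrative assumptions, not the study's actual display parameters.

```python
import random

GRID = [(c, r) for c in range(8) for r in range(6)]  # illustrative search grid

def make_config(n_items=4, rng=random):
    """Sample one spatial configuration: 1 target + (n_items - 1) distractors."""
    locs = rng.sample(GRID, n_items)
    return {"target": locs[0], "distractors": tuple(sorted(locs[1:]))}

def make_block(n_old=8, n_new=8, old_set=None, rng=random):
    """Half the trials repeat the same 'old' configurations in every block;
    the other half use freshly generated 'new' configurations."""
    if old_set is None:
        old_set = [make_config(rng=rng) for _ in range(n_old)]
    trials = ([("old", cfg) for cfg in old_set] +
              [("new", make_config(rng=rng)) for _ in range(n_new)])
    rng.shuffle(trials)
    return trials, old_set

# The same old_set is passed to every block, so 'old' contexts repeat
# across the experiment while 'new' contexts never do.
trials1, old_set = make_block()
trials2, _ = make_block(old_set=old_set)
```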


