Abstract
Two experiments examined perceptual colocation of visual and tactile stimuli in young infants. Experiment 1 compared 4- (n=15) and 6-month-old (n=12) infants' visual preferences for visual-tactile stimulus pairs presented across the same or different feet. The 4- and 6-month-olds showed, respectively, preferences for colocated and noncolocated conditions, demonstrating sensitivity to visual-tactile colocation on their feet. This extends previous findings of visual-tactile perceptual colocation on the hands in older infants. Control conditions excluded the possibility that both 6- (Experiment 1) and 4-month-olds (Experiment 2, n=12) perceived colocation on the basis of an undifferentiated supramodal coding of spatial distance between stimuli. Bimodal perception of visual-tactile colocation is available by 4 months of age, that is, prior to the development of skilled reaching.
Highlights
Two experiments examined perceptual colocation of visual and tactile stimuli in young infants
Arriving in the outside world, the newborn infant has to determine how their tactile spatial representations formed in utero relate to the much richer and generally more distant spatial environment newly offered up by hearing, olfaction, and vision. How do they make sense of this multitude of sensory inputs, learning which stimuli to attribute to common environmental events or objects and which to segregate (e.g., Kording et al., 2007; Rohe & Noppeney, 2015)? In this article, we report the findings of a study designed to determine whether young human infants can solve one aspect of this crossmodal binding problem.
Experiment 1 confirms the findings of Freier et al. (2016) that 6-month-old infants can reliably distinguish between situations in which visual-tactile stimuli are presented in the same region of space and situations in which they are presented across different locations.