Abstract

Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perceptual judgments of sound location as a function of digit magnitude (1–9). The main finding was that for stimuli presented near the median plane there was a linear left-to-right bias for localizing smaller-to-larger numbers. At lateral locations there was a central-eccentric location bias in the pointing task, and either a bias restricted to the smaller numbers (left side) or no significant number bias (right side). Prior number location also biased subsequent number judgments towards the opposite side. Findings support a lexical influence on auditory spatial perception, with a linear mapping near midline and more complex relations at lateral locations. Results may reflect coding of dedicated spatial channels, with two representing lateral positions in each hemispace, and the midline area represented by either their overlap or a separate third channel.
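To make the reported midline effect concrete, the sketch below shows one way a linear left-to-right bias could be quantified: regressing pointing error on digit magnitude. This is an illustration only, not the authors' analysis code; the trial counts, slope, and noise level are hypothetical.

```python
# Minimal sketch (not the authors' analysis): quantify a linear
# left-to-right localization bias by regressing pointing error
# (degrees; negative = left of target) on spoken digit magnitude.
# All numbers below are hypothetical, chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)
digits = np.repeat(np.arange(1, 10), 20)     # digits 1-9, 20 trials each
assumed_slope = 0.4                          # deg of rightward bias per digit step
errors = assumed_slope * (digits - 5) + rng.normal(0.0, 2.0, digits.size)

# Ordinary least squares: error = slope * digit + intercept
slope, intercept = np.polyfit(digits, errors, 1)
print(f"bias slope: {slope:+.2f} deg/digit "
      "(positive = small numbers pulled left, large numbers pulled right)")
```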

Highlights

  • The point of subjective equality (PSE) shifted systematically with number magnitude: large numbers had the left-most PSE, medium numbers an intermediate PSE, and small numbers the PSE farthest to the right (see the fitting sketch after this list)

  • Small numbers had added perceived space to the left of midline, which pushed the judgment of midline to the right
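As a rough illustration of how such PSE shifts can be estimated from left/right 2AFC data, the sketch below fits a logistic psychometric function and reads off its 50% point. The azimuths, response proportions, and starting values are hypothetical, not data from the study.

```python
# Minimal sketch (assumed analysis, not from the paper): estimate the
# point of subjective equality (PSE) from left/right 2AFC responses by
# fitting a logistic psychometric function; the PSE is the azimuth at
# which "right" responses reach 50%. All data below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, pse, slope):
    """Probability of responding 'right' at azimuth x (deg from midline)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

azimuth = np.array([-12.0, -8.0, -4.0, 0.0, 4.0, 8.0, 12.0])    # deg from midline
p_right = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.95, 0.98])  # proportion 'right'

(pse, slope), _ = curve_fit(psychometric, azimuth, p_right, p0=[0.0, 0.5])
# A positive (rightward) PSE implies extra space was perceived on the left,
# matching the pattern reported for small numbers.
print(f"PSE = {pse:+.2f} deg")
```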

Introduction

In speech recognition, the meaning of sounds is decoded by mapping acoustic features onto lexical knowledge stored in long-term memory [1]. This basic problem of relating perceptual information to long-term memory is also evident in visual object recognition, face recognition, and reading [2,3,4]. The Hansen et al. (2006), Olkkonen et al. (2008), and Shepard and Jordan (1984) studies all examined long-term memory for features directly related to the perceptual task used for testing (i.e., color or musical pitch judgments). A potential concern with that approach is subject demand: because the task and the lexical information are directly related, subjects may be subtly biased toward meeting experimental expectations. We took this approach a step further by probing the structure of long-term memory using an indirect, potentially implicit, relation between long-term memory for numbers and spatial perception. Convergent evidence from behavioral [14,15,16,17,18,19,20], lesion [21], and neuroimaging [22] studies suggests a relation between number magnitude and spatial representation.
