Abstract

Understanding spoken language is an exceptional computational achievement of the human cognitive apparatus. Theories of how humans recognize spoken words fall into two categories: Some theories assume a fully bottom-up flow of information, in which successively more abstract representations are computed. Other theories, in contrast, assert that activation of a more abstract representation (e.g., a word) can affect the activation of smaller units (e.g., phonemes or syllables). The two experimental conditions reported here demonstrate the top-down influence of word representations on the activation of smaller perceptual units. The results show that perceptual processes are not strictly bottom-up: Computations at logically lower levels of processing are affected by computations at logically more abstract levels. These results constrain and inform theories of the architecture of human perceptual processing of speech.
