Abstract

Context plays a critical role in human speech processing. Rapid, efficient integration of context and input is accomplished by interactive processing: bottom-up input information and top-down context information work together to constrain the percept. One domain where this is especially clear is the effect of word context on the perception of speech sounds. Behavioral experiments and computational model simulations show that interactive processing confers both benefits (robust perception in noisy environments and tuning in response to changes in input patterns) and costs (errors and delays in perception when the input is inconsistent with the context). Context effects are also evident in the constraining effect of global communication context on activation of different meanings of ambiguous words (i.e., homophones): when only highly imageable meanings are consistent with the context (a word-to-picture matching task), concrete noun meanings (e.g., the tree-related meaning of bark) become more active than less imageable meanings (e.g., the dog-related meaning of bark). These findings are consistent with an interactive graded constraint satisfaction view of speech perception in which bottom-up input and top-down context simultaneously constrain the final percept.
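The interactive-processing mechanism described above can be illustrated with a toy simulation (a minimal sketch with made-up parameters and a made-up lexicon, not the computational model used in the paper). An acoustically ambiguous initial sound between /g/ and /k/ is followed by "-ift"; only "gift" is a word, so top-down lexical feedback gradually pushes the percept toward /g/, while the unsupported /k/ interpretation decays.

```python
# Toy interactive-activation sketch (hypothetical parameters, not the
# authors' model). Bottom-up: phonemes excite consistent words.
# Top-down: active words feed activation back to their phonemes.

phoneme_act = {"g": 0.5, "k": 0.5}  # ambiguous bottom-up input
word_act = {"gift": 0.0}            # "kift" is not in the lexicon

BOTTOM_UP = 0.1   # phoneme -> word excitation
TOP_DOWN = 0.1    # word -> phoneme feedback
DECAY = 0.05      # passive decay of all units

for _ in range(50):
    # Words gain activation from consistent phonemes (bottom-up).
    word_act["gift"] += BOTTOM_UP * phoneme_act["g"] - DECAY * word_act["gift"]
    # Active words feed activation back to their phonemes (top-down).
    phoneme_act["g"] += TOP_DOWN * word_act["gift"] - DECAY * phoneme_act["g"]
    # /k/ receives no lexical support and simply decays.
    phoneme_act["k"] -= DECAY * phoneme_act["k"]
    # Keep activations in [0, 1].
    for units in (phoneme_act, word_act):
        for name in units:
            units[name] = min(max(units[name], 0.0), 1.0)

print(phoneme_act)  # /g/ ends up more active than /k/
```

Despite identical bottom-up evidence for the two phonemes, the percept settles on /g/ because only that interpretation is consistent with the lexical context, a simple instance of graded constraint satisfaction.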

