Abstract

Robust and efficient speech perception relies on the interpretation of acoustically variable phoneme realizations. Some neuroimaging evidence supports a categorization account, in which speech-evoked patterns of brain activity are transformed into categorical representations. Other evidence supports a continuous account, in which subphonemic detail is maintained over time. However, it is not well understood whether or how these patterns of brain activity undergo a continuous-to-categorical transformation, nor how task demands may modulate this process. My approach involves applying biologically and psychologically plausible linear classifiers to neural activity as measured by magnetoencephalography (MEG). Data came from adult participants who heard isolated, randomized tokens from a /ba/-/da/ speech continuum. In the passive task, their attention was directed elsewhere. In the active task, they categorized each token as “ba” or “da.” I find that classifiers successfully decode “ba” versus “da” perception from the MEG data. But do they perform like humans, with excellent sensitivity to between-category differences and poor sensitivity to within-category differences, or do they respect stimulus identity? And do passive versus active task demands affect these patterns of classifier performance? Given the wealth of information in the MEG signal, we can test these hypotheses over time and space, as perception unfolds across the brain.
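The abstract describes time-resolved decoding of “ba” versus “da” from MEG activity with linear classifiers. The paper itself provides no code, so the following is a minimal toy sketch of that style of analysis, under assumptions of my own: simulated trial × sensor × timepoint data in place of real MEG recordings, and a simple nearest-class-mean linear classifier (one plausible choice of linear decoder, not necessarily the one used in the study) evaluated with cross-validation at each timepoint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated MEG-like data: trials x sensors x timepoints.
# (Hypothetical stand-in for real recordings; shapes chosen arbitrarily.)
n_trials, n_sensors, n_times = 200, 32, 50
y = rng.integers(0, 2, n_trials)            # 0 = "ba", 1 = "da"
X = rng.normal(size=(n_trials, n_sensors, n_times))
# Inject a class-dependent signal on a subset of sensors from timepoint 20 on,
# mimicking category information emerging over time.
X[y == 1, :8, 20:] += 0.5

def decode_over_time(X, y, n_folds=5):
    """Cross-validated accuracy of a nearest-class-mean linear classifier
    at each timepoint."""
    n_trials, _, n_times = X.shape
    order = np.random.default_rng(1).permutation(n_trials)
    folds = np.array_split(order, n_folds)
    acc = np.zeros(n_times)
    for t in range(n_times):
        Xt = X[:, :, t]                     # sensor pattern at this timepoint
        correct = 0
        for test_idx in folds:
            train = np.ones(n_trials, bool)
            train[test_idx] = False
            mu0 = Xt[train & (y == 0)].mean(axis=0)
            mu1 = Xt[train & (y == 1)].mean(axis=0)
            # Linear decision rule: project onto the difference of class means.
            w = mu1 - mu0
            b = -w @ (mu0 + mu1) / 2
            pred = (Xt[test_idx] @ w + b > 0).astype(int)
            correct += (pred == y[test_idx]).sum()
        acc[t] = correct / n_trials
    return acc

acc = decode_over_time(X, y)
# Accuracy should sit near chance before the signal onset and rise above it after.
print("early:", acc[:20].mean().round(3), "late:", acc[20:].mean().round(3))
```

Running the same decoder separately on passive- and active-task data, and comparing between-category versus within-category pair discrimination, would correspond to the contrasts the abstract raises.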
