Abstract

We investigated the perception and categorization of speech (vowels, syllables) and non-speech (tones, tonal contours) stimuli using MEG. In a delayed-match-to-sample paradigm, participants listened to two sounds and decided if they sounded exactly the same or different (auditory discrimination, AUD), or if they belonged to the same or different categories (category discrimination, CAT). Stimuli across the two conditions were identical; the category definitions for each kind of sound were learned in a training session before recording. MEG data were analyzed using an induced wavelet transform method to investigate task-related differences in time–frequency patterns. In auditory cortex, for both AUD and CAT conditions, an alpha (8–13 Hz) band activation enhancement during the delay period was found for all stimulus types. A clear difference between AUD and CAT conditions was observed for the non-speech stimuli in auditory areas and for both speech and non-speech stimuli in frontal areas. The results suggest that alpha band activation in auditory areas is related to both working memory and categorization for new non-speech stimuli. The fact that the dissociation between speech and non-speech occurred in auditory areas, but not frontal areas, points to different categorization mechanisms and networks for newly learned (non-speech) and natural (speech) categories.
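The analysis step described above (induced, i.e. non-phase-locked, wavelet-based time–frequency power examined in the alpha band) can be approximated in a few lines. The following is a minimal sketch, assuming MNE-Python and a hypothetical pre-epoched data file; the file name, frequency grid, and baseline window are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of an induced (non-phase-locked) time-frequency analysis,
# assuming MNE-Python. File names, baseline window, and frequency grid are
# illustrative assumptions, not the authors' actual pipeline.
import numpy as np
import mne
from mne.time_frequency import tfr_morlet

# Hypothetical pre-cut epochs for one participant and one condition (e.g. AUD)
epochs = mne.read_epochs("sub01_AUD-epo.fif")

# Approximate "induced" power: remove the phase-locked (evoked) response
# from each trial before applying the wavelet transform
epochs_induced = epochs.copy().subtract_evoked()

freqs = np.arange(4.0, 31.0, 1.0)   # 4-30 Hz grid, covering alpha (8-13 Hz)
n_cycles = freqs / 2.0              # frequency-dependent Morlet wavelet width

power = tfr_morlet(epochs_induced, freqs=freqs, n_cycles=n_cycles,
                   return_itc=False, average=True, decim=2)

# Express power as a log-ratio relative to a pre-stimulus baseline,
# then isolate the alpha band for inspection over the delay period
power.apply_baseline(baseline=(-0.5, 0.0), mode="logratio")
alpha = power.copy().crop(fmin=8.0, fmax=13.0)
alpha.plot_topo(title="Induced alpha power (AUD, sketch)")
```

Contrasting the resulting alpha-band power between the AUD and CAT conditions (and between speech and non-speech stimuli) over auditory and frontal sensors or sources would mirror the task-related comparisons reported in the abstract.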
