Abstract
Neuroimaging studies of speech processing increasingly rely on artificial speech-like sounds whose perceptual status as speech or non-speech is assigned by simple subjective judgments; brain activation patterns are interpreted according to these status assignments. The naïve perceptual status of one such stimulus, spectrally-rotated speech (not consciously perceived as speech by naïve subjects), was evaluated in discrimination and forced identification experiments. Discrimination of variation in spectrally-rotated syllables in one group of naïve subjects was strongly related to the pattern of similarities in phonological identification of the same stimuli provided by a second, independent group of naïve subjects, suggesting either that (1) naïve rotated syllable perception involves phonetic-like processing, or (2) that perception is solely based on physical acoustic similarity, and similar sounds are provided with similar phonetic identities. Analysis of acoustic (Euclidean distances of center frequency values of formants) and phonetic similarities in the perception of the vowel portions of the rotated syllables revealed that discrimination was significantly and independently influenced by both acoustic and phonological information. We conclude that simple subjective assessments of artificial speech-like sounds can be misleading, as perception of such sounds may initially and unconsciously utilize speech-like, phonological processing.
Highlights
Recent behavioral and neuroimaging studies comparing speech and non-speech sound processing in the human brain have relied heavily on digitally-manipulated or synthesized sounds with speech-like acoustical properties that are not subjectively reported to be "speech" by listeners [1,2,3,4,5,6,7]
Spectrally-rotated speech, obtained by inverting the speech spectrum around a center frequency, is a manipulation that has been used as a non-phonological control stimulus in human neuroimaging studies
Fewer discrimination errors were produced as phonological distances increased (Figure 2a for vowels, 2b for consonants), suggesting a significant relationship between naïvely perceived phonological attributes of rotated stimuli and nonlinguistic perceptual judgments about them. This does not, however, address the potential role played by physical acoustic distance in these discrimination judgments, since phonologically similar sounds are also acoustically similar
Summary
If the pattern of perceptual differences significantly resembles the pattern of phonological distances, there are two possible explanations: either variation in rotated syllables is naïvely perceived in a phonetic-like manner to some extent, or perception is based solely on physical acoustic similarity, and two sounds that are acoustically similar will be given similar phonological descriptions. These explanations can be tested by quantifying the acoustic distance relations among the stimulus sounds and examining whether phonological and acoustic distances independently account for variation in discrimination performance. Vowels were selected because consonants vary widely in their meaningful acoustic features (for example, high-frequency bursts of …)
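The acoustic half of this test rests on a simple computation: each vowel is represented by its formant center frequencies, and acoustic distance between two vowels is the Euclidean distance between those frequency vectors. A minimal sketch follows; the formant values are rough textbook estimates for three English vowels, chosen for illustration only, not the study's measured rotated-stimulus values.

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

# Illustrative (F1, F2) center frequencies in Hz for three vowels.
# These are approximate textbook values, not data from the study.
formants = {
    "i": (270, 2290),
    "a": (730, 1090),
    "u": (300, 870),
}

# Pairwise acoustic distance: Euclidean distance in formant space.
pairs = [("i", "a"), ("i", "u"), ("a", "u")]
acoustic_distance = {
    (v1, v2): dist(formants[v1], formants[v2]) for v1, v2 in pairs
}

for (v1, v2), d in acoustic_distance.items():
    print(f"{v1}-{v2}: {d:.0f} Hz")
```

Distances of this kind can then be entered, alongside phonological distances, as separate predictors of discrimination performance to ask whether each accounts for independent variance.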