Abstract
A recently developed model of speech production [Story & Bunton, JASA, 146(4), 2522–2528] was used to generate VCVs that were examined with regard to both articulation and identification of the consonant. In this model, an utterance is generated by specifying relative acoustic events along a time axis. These events consist of directional changes of the vocal tract resonance frequencies, called resonance deflection patterns (RDPs), that, when associated with a temporal event function, are transformed via acoustic sensitivity functions into time-varying modulations of the vocal tract shape. RDPs specifying /b/, /d/, and /g/ would typically be coded as [−1 −1 −1], [−1 1 1], and [−1 1 −1], respectively, indicating, from left to right, the targeted directional shift of the first, second, and third resonances of the vocal tract. In this study, two types of V1CV2 continua were constructed in three vowel contexts (/i, a, u/) by incrementing in small steps (1) the second resonance deflection from −1 to 1, and (2) the third resonance deflection from 1 to −1. The resulting time-varying vocal tract shapes emulate expected articulation patterns for the stop consonants, and a perceptual experiment indicated that listeners identify the consonants based on the polarity of the RDP values.
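As a purely illustrative sketch (not the published model, which maps RDPs through acoustic sensitivity functions into vocal tract shape modulations), the Python snippet below lays out a stepped continuum between the /d/-like pattern [−1 1 1] and the /g/-like pattern [−1 1 −1], corresponding to the second continuum type described above, and applies a simple polarity-based labeling rule. The step count, function names, and the labeling rule itself are assumptions for illustration only.

import numpy as np

N_STEPS = 10  # assumed number of continuum steps; not specified in the abstract

def rdp_continuum(start=(-1, 1, 1), end=(-1, 1, -1), n_steps=N_STEPS):
    """Linearly interpolate resonance deflection patterns (R1, R2, R3).

    Default endpoints step the third deflection from 1 to -1, i.e. from a
    /d/-like RDP [-1 1 1] toward a /g/-like RDP [-1 1 -1].
    """
    start, end = np.asarray(start, float), np.asarray(end, float)
    return [start + (end - start) * t for t in np.linspace(0.0, 1.0, n_steps)]

def label_by_polarity(rdp):
    """Toy labeling rule based only on the signs of the R2 and R3 deflections."""
    _, r2, r3 = rdp
    if r2 < 0:
        return "/b/-like"                        # pattern [-1 -1 -1]
    return "/d/-like" if r3 > 0 else "/g/-like"  # [-1 1 1] vs. [-1 1 -1]

for rdp in rdp_continuum():
    print(np.round(rdp, 2), label_by_polarity(rdp))

Running the sketch prints each interpolated RDP alongside its polarity-based label, mirroring the idea that listeners' consonant identifications track the sign of the deflection values rather than their exact magnitudes.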