Abstract
The recent increasing availability of comprehensive real-time MRI data of the vocal tract and concomitant progress in air-tissue boundary segmentation present novel opportunities for articulatory modeling. PCA-based articulatory models represent vocal tract configurations as weighted linear combinations of articulatory components that characterize vocal tract shaping patterns. Historically, most such models have been developed using data from a single speaker, and their direct application to data from multiple speakers may yield components that are not comparable across speakers. A technique that can address this issue is PARAFAC, which has previously been applied to tongue contour X-ray tracings of 10 English vowels from 5 speakers. PARAFAC introduces an additional level of weighting of the articulatory components that is constant for, and therefore characteristic of, each speaker. We revisited PARAFAC and ran a successful pilot study on real-time MRI air-tissue boundaries of the entire vocal tract from 4 speakers, each uttering two repetitions of a set of 4 Shibboleth sentences, yielding 289 phones per speaker. Application to a much larger and more diverse real-time MRI dataset already collected by our team will provide crucial progress toward true cross-speaker articulatory modeling. [Work supported by NIH and NSF.]
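The PARAFAC model described above is trilinear: each observed boundary coordinate is approximated as a sum over components of a speaker weight times a phone weight times a shared shaping pattern. The snippet below is a minimal sketch of such a decomposition using tensorly's parafac on a synthetic speakers x phones x boundary-points array; the array shape, rank, and variable names are illustrative assumptions, not the authors' actual data or implementation.

```python
# Minimal sketch of a PARAFAC articulatory decomposition (illustrative only).
# Assumes a 3-way array of vocal-tract boundary coordinates:
#   speakers x phones x boundary points -- filled with random data here.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

n_speakers, n_phones, n_points = 4, 289, 200   # assumed dimensions
rng = np.random.default_rng(0)
X = tl.tensor(rng.standard_normal((n_speakers, n_phones, n_points)))

# Rank-2 PARAFAC:
#   X[s, p, k] ~= sum_r speaker[s, r] * phone[p, r] * shape[k, r]
weights, factors = parafac(X, rank=2)
speaker_factors, phone_factors, shape_factors = factors

# speaker_factors holds one constant weight per speaker per component,
# i.e., the speaker-characteristic scaling PARAFAC adds on top of the
# phone-specific component weights (phone_factors) and the shared
# vocal-tract shaping patterns (shape_factors).
print(speaker_factors.shape, phone_factors.shape, shape_factors.shape)
# -> (4, 2) (289, 2) (200, 2)
```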