Abstract

A general‐purpose computer‐graphics package has been implemented for displaying animated simulations of a variable range of facial topographies, shapes, and articulatory gestures in real time, at a rate of 50 frames per second. The topographies can now include features, such as the teeth, which may be only intermittently visible. Animation is achieved by supplying streams of time‐varying positional data, presently obtained from measurements of talkers speaking VCV or CVC utterances; strategies for synthesis by interpolating between successive target configurations will also be discussed. The package can generate (a) spatially and temporally graded continua of stimuli, including stimuli in which the movements of normally related articulators are deliberately decoupled, (b) different subsets of articulators, and (c) different talkers. Experience gained from a prototypical identification experiment using /aCV/ utterances has been used to develop a refined facial model, which is being applied to study the relative importance of lip and teeth movements in the perception of nondiphthongal vowels. The results will be reported. [Work supported by MRC.]
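The abstract does not specify the interpolation scheme or data layout, so the following is only a minimal sketch of the two ideas it describes: driving a facial model from streams of time-varying positional data at 50 frames per second, and synthesising motion by blending between target configurations to produce graded continua. All names (`sample_trajectory`, `morph_continuum`) and the choice of linear interpolation are hypothetical, not taken from the paper.

```python
import numpy as np

FRAME_RATE = 50.0  # frames per second, as stated in the abstract


def sample_trajectory(times, targets, frame_rate=FRAME_RATE):
    """Interpolate between successive target configurations.

    times   : (K,) keyframe times in seconds, strictly increasing
    targets : (K, P, 3) array of P 3-D marker/vertex positions per keyframe
    returns : (F, P, 3) per-frame positions sampled at `frame_rate`

    Linear interpolation is an assumption; the paper only says
    "interpolating between successive target configurations".
    """
    times = np.asarray(times, dtype=float)
    targets = np.asarray(targets, dtype=float)
    frame_times = np.arange(times[0], times[-1], 1.0 / frame_rate)
    # Interpolate each coordinate of each point independently.
    flat = targets.reshape(len(times), -1)
    frames = np.stack(
        [np.interp(frame_times, times, flat[:, j]) for j in range(flat.shape[1])],
        axis=1,
    )
    return frames.reshape(len(frame_times), *targets.shape[1:])


def morph_continuum(traj_a, traj_b, n_steps):
    """Spatially graded continuum: blend two equal-length trajectories.

    Yields n_steps stimuli running from traj_a (weight 0) to traj_b
    (weight 1). Applying different weights to different subsets of
    points would, in the same spirit, decouple articulators that
    normally move together.
    """
    for w in np.linspace(0.0, 1.0, n_steps):
        yield (1.0 - w) * traj_a + w * traj_b
```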
