Abstract
Analytical investigations of speech perception in the audio-visual domain require a visual stimulus that is plausibly lifelike, controllable, and well-specified. A computer package has been developed to produce real-time animated graphics that simulate the front-facial topography and the articulatory movements of the lips and jaw during VCV speech utterances. It is highly modular and can simulate a wide range of facial features, shapes, and movements. It is currently driven by streams of time-varying positional data obtained from experimental measurements of human speakers enunciating VCV utterances. The measurements of a series of point coordinates are made from sequential single frames of a videotape recording using a microprocessor-linked data-logging device. Corrections are made for the effects of global head and body movements. This is the lowest level of control in a hierarchy whose higher levels could include algorithms for generating the articulatory trajectories by rule from phonetic transcriptions. Although the development of the synthesizer is still at an early stage, the acceptability of its display suggests great potential for use in analytical investigations, for which the graphics will eventually be synchronized with an audio-speech synthesizer. [Work supported by MRC.]
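To illustrate the head-movement correction step, the following is a minimal sketch, not the authors' method, which the abstract does not specify. It assumes each video frame supplies 2-D coordinates for the lip/jaw landmarks plus two hypothetical reference markers fixed to the head (e.g., nose bridge and ear); expressing the landmarks in a frame defined by those markers removes global translation, rotation, and scale before the data drive the animation.

```python
import numpy as np

def normalize_frame(points, ref_a, ref_b):
    """Remove global head translation, rotation, and scale from one
    frame of digitized 2-D landmark coordinates.

    points : (N, 2) array of lip/jaw landmark coordinates
    ref_a, ref_b : coordinates of two markers assumed fixed to the head
                   (hypothetical; the abstract does not name the markers)

    Returns the landmarks in a head-centred frame in which ref_a sits
    at the origin and ref_b lies on the positive x-axis at unit distance.
    """
    points = np.asarray(points, dtype=float)
    origin = np.asarray(ref_a, dtype=float)
    axis = np.asarray(ref_b, dtype=float) - origin

    scale = np.hypot(axis[0], axis[1])        # inter-marker distance
    angle = np.arctan2(axis[1], axis[0])      # head roll in this frame
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s], [s, c]])         # rotation taking axis onto +x

    # translate, rotate, then rescale every landmark
    return (points - origin) @ rot.T / scale

# Applying normalize_frame to every frame yields articulatory
# trajectories that reflect only lip and jaw motion, independent of
# how the speaker's head moved during the recording.
```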