Abstract
A method for automatic audiovisual transcription of speech employing acoustic, electromagnetic articulography, and visual speech representations is developed. It combines the audio and visual modalities, which yields a synergistic gain in speech recognition accuracy. To establish a robust solution, basic research is carried out on the relation between the allophonic variation of speech, i.e., the changes in the articulatory setting of the speech organs for the same phoneme produced in different phonetic environments, and objective signal parameters (both audio and video). The method is sensitive to minute allophonic detail as well as to accentual differences. It is shown that by analyzing the video signal together with the acoustic signal, speech transcription can be performed more accurately and robustly than with the acoustic modality alone. In particular, various features extracted from the visual signal are tested for their ability to encode allophonic variation in pronunciation. New methods for modeling the accentual and allophonic variation of speech are developed. [Research sponsored by the Polish National Science Centre, Dec. No. 2015/17/B/ST6/01874.]
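The abstract describes combining audio and visual modalities but does not specify the fusion scheme. Below is a minimal illustrative sketch, assuming feature-level ("early") fusion by per-stream normalization and concatenation of time-aligned frames; the feature types and dimensions are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def early_fusion(audio_feats, visual_feats):
    """Feature-level fusion: z-normalize each stream, then concatenate.

    audio_feats : (n_frames, d_audio), e.g., MFCC vectors (assumed)
    visual_feats: (n_frames, d_visual), e.g., lip-geometry parameters (assumed)
    Both streams are assumed to be time-aligned at a common frame rate.
    """
    def znorm(x):
        # Per-dimension standardization so neither stream dominates by scale
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    return np.hstack([znorm(audio_feats), znorm(visual_feats)])

# Illustrative use with synthetic data (dimensions are assumptions):
rng = np.random.default_rng(0)
audio = rng.normal(size=(500, 13))   # e.g., 13 MFCCs per frame
video = rng.normal(size=(500, 6))    # e.g., 6 lip-shape parameters per frame
fused = early_fusion(audio, video)   # shape (500, 19), fed to a classifier
```

A late-fusion alternative would instead train separate audio and visual recognizers and combine their scores; the synergy effect reported in the abstract is consistent with either scheme.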