Abstract

Viseme classification and analysis is, in any language, among the most important preliminaries for multimedia research such as talking heads, lip reading, lip synchronization, and computer-assisted pronunciation training. Viseme classification is language dependent; consequently, the visemes of a language are grouped differently depending on the target application. To date, no such study has been carried out for Persian, which makes research on audio-visual speech recognition (AVSR) or lip synchronization in Persian very difficult. In this paper, we propose a novel image-based method for grouping Persian visemes that takes the coarticulation effect into account. For each phoneme, the central frame is selected from several images representing different positions in various syllables. Having obtained the eigenlips of each phoneme, we project each viseme onto another viseme's eigenspace, and the weights resulting from the reconstruction serve as the criterion for comparing viseme similarity. The experimental results indicate the high precision and robustness of the proposed algorithm.
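The sketch below is a minimal illustration (not the authors' code) of the eigenlip projection idea the abstract describes: build an eigenspace via PCA from one viseme's lip images, project another viseme's images onto it, and use the reconstruction quality as a similarity score. The image size, number of components, and the exact similarity definition are assumptions for illustration only.

```python
# Minimal sketch of eigenlip-based viseme comparison, assuming flattened
# lip-region images and an energy-based similarity score (both assumptions).

import numpy as np


def eigenlips(frames: np.ndarray, n_components: int = 10):
    """Compute the mean lip image and the top principal components (eigenlips).

    frames: array of shape (n_frames, height * width), one flattened lip image per row.
    """
    mean = frames.mean(axis=0)
    centered = frames - mean
    # SVD of the centered data; the rows of vt are the principal directions (eigenlips).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]


def reconstruction_similarity(frames: np.ndarray, mean: np.ndarray,
                              components: np.ndarray) -> float:
    """Project frames onto another viseme's eigenspace and score the reconstruction.

    Returns the average fraction of each frame's energy captured by the projection
    (1.0 = perfect reconstruction, 0.0 = nothing captured). This particular score is
    an assumption; the paper uses the projection weights as its comparison criterion.
    """
    centered = frames - mean
    weights = centered @ components.T          # projection coefficients
    reconstructed = weights @ components       # back-projection into image space
    err = np.linalg.norm(centered - reconstructed, axis=1)
    norm = np.linalg.norm(centered, axis=1) + 1e-12
    return float(np.mean(1.0 - err / norm))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in data: 40 flattened 32x32 "lip images" per viseme (random here).
    viseme_a = rng.normal(size=(40, 32 * 32))
    viseme_b = rng.normal(size=(40, 32 * 32))

    mean_a, comps_a = eigenlips(viseme_a, n_components=10)
    print("A on A's eigenspace:", reconstruction_similarity(viseme_a, mean_a, comps_a))
    print("B on A's eigenspace:", reconstruction_similarity(viseme_b, mean_a, comps_a))
```

In this framing, two visemes whose images are well reconstructed by each other's eigenspaces would be grouped into the same viseme class.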
