Abstract

Facial expression image synthesis is an important technique in human–computer interaction, but it remains a difficult task, especially the synthesis of varied, realistic expressive face images under a flexible control mechanism. In this paper, a novel parameter-driven method for synthesizing realistic, comprehensive expressional images is proposed. In this method, a kernel-based bi-factor factorization model is adopted to decompose two influence factors, identity and expression, together with their interaction, from a small training database. The facial expression images in the training database can therefore be represented by their corresponding identity and expression vectors and an interaction matrix. A comprehensive expression image can then be manipulated in a flexible manner through linear combinations of the basis expression vectors. To enable the trained model to produce realistic expressional images of any person outside the training set, the expression ratio image (ERI) and a relative shape description are incorporated into the model to enhance its expressive power. Experimental results show that realistic facial expression images can be synthesized successfully from only one picture of a person who is quite different from the persons in the training database, and controlled efficiently and effectively by a parameter vector.
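The core idea of the bi-factor decomposition can be illustrated with a minimal sketch. The snippet below is not the paper's actual (kernel-based) method; it is a simplified linear two-factor model in the style of bilinear style/content separation, where each training image is approximated by an identity vector, an expression vector, and a shared interaction tensor, and a novel expression is formed as a linear combination of basis expression vectors. All array names, ranks, and the random stand-in data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training set: n_id identities x n_expr expressions,
# each a vectorized image of n_pix pixels (random data for illustration).
n_id, n_expr, n_pix = 5, 4, 64
Y = rng.normal(size=(n_id, n_expr, n_pix))

# Bi-factor model: Y[i, j, :] ~= sum_{r,s} A[i, r] * B[j, s] * W[r, s, :]
# A holds identity vectors, B holds expression vectors, W the interaction.
r_id, r_expr = 3, 2  # illustrative ranks

# Identity factors from an SVD of the identity-mode unfolding.
U_id, _, _ = np.linalg.svd(Y.reshape(n_id, -1), full_matrices=False)
A = U_id[:, :r_id]                       # identity vectors (rows of A)

# Expression factors from an SVD of the expression-mode unfolding.
U_ex, _, _ = np.linalg.svd(
    Y.transpose(1, 0, 2).reshape(n_expr, -1), full_matrices=False)
B = U_ex[:, :r_expr]                     # basis expression vectors

# Interaction tensor by projecting Y onto both factor subspaces
# (A and B have orthonormal columns, so A.T / B.T act as pseudoinverses).
W = np.einsum('ir,ijp->rjp', A, Y)
W = np.einsum('js,rjp->rsp', B, W)

# Reconstruction of the whole training set from the factorization.
Y_hat = np.einsum('ir,js,rsp->ijp', A, B, W)

# A "comprehensive" expression: linear blend of two basis expression
# vectors, rendered for the first identity.
b_new = 0.5 * B[0] + 0.5 * B[1]
y_new = np.einsum('r,s,rsp->p', A[0], b_new, W)
```

Blending the expression vectors while holding the identity vector fixed is what gives the parameter-driven control described above: the blend weights act as the control parameter vector.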
