Abstract

We propose a deep-neural-network-based image-to-image translation approach to domain adaptation, which aims at finding translations between image domains. Although recent GAN-based methods have shown promising results in image-to-image translation, they tend to fail at preserving semantic information and image details during translation, which limits their practicality on tasks such as facial expression synthesis. In this paper, we present ExprADA, a framework trained with two objectives: first, a multi-domain image synthesis model that, with a focus on the data augmentation process, yields better recognition performance than other GAN-based methods; second, a domain adaptation scheme that transforms the visual appearance of images across domains while preserving facial characteristics (e.g., identity). In doing so, the expression recognition model learned on the source domain generalizes to translated images from the target domain, without re-training a model for each new target domain. Extensive experiments demonstrate that ExprADA significantly improves facial expression recognition accuracy compared to state-of-the-art domain adaptation methods.
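To make the training setup concrete, below is a minimal sketch, assuming PyTorch, of GAN-based image-to-image translation with an identity-preservation term in the spirit described above. The tiny network architectures, the frozen identity encoder `id_net`, the loss weight `lambda_id`, and the random placeholder batches are all illustrative assumptions, not the authors' actual design.

```python
# Minimal sketch (not the authors' code) of adversarial translation from a
# target domain to the source-domain appearance, with an identity term that
# keeps face characteristics intact. Shapes and weights are assumptions.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Translates a target-domain face image to the source-domain style."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Scores whether an image looks like it came from the source domain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, x):
        return self.net(x)

G, D = TinyGenerator(), TinyDiscriminator()
id_net = nn.Sequential(  # stand-in for a frozen, pre-trained identity encoder (assumption)
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in id_net.parameters():
    p.requires_grad_(False)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
lambda_id = 10.0  # weight of the identity-preservation term (assumption)

source_batch = torch.randn(4, 3, 64, 64)  # placeholder source-domain faces
target_batch = torch.randn(4, 3, 64, 64)  # placeholder target-domain faces

# Discriminator step: real source images vs. translated target images.
fake = G(target_batch).detach()
d_loss = bce(D(source_batch), torch.ones(4, 1)) + bce(D(fake), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool D while keeping identity features close to the input.
fake = G(target_batch)
adv_loss = bce(D(fake), torch.ones(4, 1))
id_loss = nn.functional.l1_loss(id_net(fake), id_net(target_batch))
g_loss = adv_loss + lambda_id * id_loss
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Under this setup, the adversarial term pushes translated target images toward the source-domain appearance, while the identity term keeps the translated face close to the original in feature space, so a classifier trained only on the source domain can be applied to the translations without re-training.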
