Abstract

The emotional influence of multimedia material on viewers is an important indicator of its value. However, most existing image transfer algorithms do not take the affective information embedded in the image into consideration. An emotion does not depend only on colors or on a single image, yet previous work on emotion transfer has been limited to colors or a single image and gains no further benefit from additional images. In addition, some models proposed for image emotion transfer are deterministic: once trained, they can generate only a unimodal output for a given input image. This paper proposes a new emotion-based image transfer algorithm, the Emotional Generative Adversarial Network (EGAN), to address these issues. To benefit from more images, the model uses adversarial training to generalize emotional features; the resulting features are more objective and comprehensive than those of previous works. To achieve diverse outputs of emotional transfer, a new disentanglement strategy is proposed, which assumes that emotional images from different emotion domains can be embedded into a shared latent neutral high-level concept (NHC) space and a shared latent emotional low-level feature (ELF) space. By combining the NHC representation embedded from the input image with the ELF representation embedded from a target-emotion reference image, the model can reconstruct a transferred target-emotion image; diverse outputs can therefore be obtained by selecting different reference images. Two quantitative measures are presented to evaluate the effect of image emotion transfer. Experimental results show that the proposed method performs better than state-of-the-art baselines.
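
The disentangled transfer step described above can be illustrated with a minimal PyTorch-style sketch. The abstract does not specify the network architectures, so every module, layer size, and name below (NHCEncoder, ELFEncoder, Generator) is a hypothetical illustration of the idea rather than the authors' implementation; it only shows how a content image's NHC code is combined with a reference image's ELF code to reconstruct a target-emotion image.

```python
# Hypothetical sketch of the disentangled emotion-transfer idea: the
# architectures are assumptions, not the EGAN model from the paper.
import torch
import torch.nn as nn


class NHCEncoder(nn.Module):
    """Embeds an image into the shared neutral high-level concept (NHC) space."""
    def __init__(self, nhc_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, nhc_dim),
        )

    def forward(self, x):
        return self.net(x)


class ELFEncoder(nn.Module):
    """Embeds an image into the shared emotional low-level feature (ELF) space."""
    def __init__(self, elf_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, elf_dim),
        )

    def forward(self, x):
        return self.net(x)


class Generator(nn.Module):
    """Reconstructs a transferred image from a (NHC, ELF) pair."""
    def __init__(self, nhc_dim=256, elf_dim=64, out_size=64):
        super().__init__()
        self.out_size = out_size
        self.fc = nn.Linear(nhc_dim + elf_dim, 128 * (out_size // 4) ** 2)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, nhc, elf):
        h = self.fc(torch.cat([nhc, elf], dim=1))
        h = h.view(-1, 128, self.out_size // 4, self.out_size // 4)
        return self.deconv(h)


# Transfer: content (NHC) comes from the input image, emotion (ELF) from a
# target-emotion reference image; changing the reference changes the output.
enc_nhc, enc_elf, gen = NHCEncoder(), ELFEncoder(), Generator()
content_img = torch.randn(1, 3, 64, 64)    # input image (dummy tensor)
reference_img = torch.randn(1, 3, 64, 64)  # target-emotion reference (dummy tensor)
transferred = gen(enc_nhc(content_img), enc_elf(reference_img))
```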
