Abstract

The flexible artistic creation of paintings using deep neural networks has attracted considerable attention recently. Existing image-to-image translation approaches show powerful capabilities in producing photos across various domains, but little attention has been paid to reference-based and line-based tasks simultaneously. In this paper, we introduce a novel network, RefFaceNet, to synthesize face portraits from reference face photos and line-art drawings that consist only of the outlines of the principal facial components. Our model gives people with no painting experience the freedom to create. We use two separate encoders to learn better feature representations for the line domain and the reference domain, and we build an Attention-based Face Transfer Module composed of several sub-modules to capture the spatial correspondence between the two types of features. To give our generator a more robust encoding ability, we first learn a direct mapping from the line drawings to their ground-truth color images and then apply distortions to the reference examples. With the assistance of optimal transport, we further propose to maintain Spatial Distortion Consistency between reference pictures of different geometric shapes by aligning their features in a high-dimensional space. A series of experiments demonstrates the superiority of our method in generating more pleasing images compared with state-of-the-art models.
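The Spatial Distortion Consistency idea — aligning the features of a reference image and its geometrically distorted copy through an optimal-transport plan — can be sketched roughly as follows. This is a minimal illustration, not the paper's actual formulation: the entropic Sinkhorn solver, the uniform marginals, the squared-distance cost, and the names `sinkhorn_plan` and `sdc_loss` are all assumptions introduced here for clarity.

```python
import math

def sinkhorn_plan(cost, reg=0.1, iters=200):
    """Entropic-regularized optimal transport plan (Sinkhorn iterations).

    cost: n x m matrix (list of lists) of pairwise feature distances.
    Uses uniform marginals over the two sets of feature vectors.
    """
    n, m = len(cost), len(cost[0])
    K = [[math.exp(-c / reg) for c in row] for row in cost]  # Gibbs kernel
    u = [1.0 / n] * n
    v = [1.0 / m] * m
    for _ in range(iters):
        # Alternately rescale rows and columns to match the marginals.
        u = [(1.0 / n) / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [(1.0 / m) / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

def sdc_loss(feats_a, feats_b, reg=0.1):
    """Hypothetical consistency loss between two sets of feature vectors.

    feats_a / feats_b: features of the reference image and of its distorted
    copy. Because the transport plan matches features regardless of their
    spatial position, the loss stays small under geometric distortion.
    """
    cost = [[sum((x - y) ** 2 for x, y in zip(fa, fb)) for fb in feats_b]
            for fa in feats_a]
    plan = sinkhorn_plan(cost, reg)
    return sum(plan[i][j] * cost[i][j]
               for i in range(len(feats_a)) for j in range(len(feats_b)))
```

For example, a feature set compared against a spatially permuted copy of itself yields a near-zero loss, since the transport plan simply re-matches each feature to its counterpart; a plain elementwise distance would not have this invariance.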

