Abstract
Synthesizing sketches from facial photos is of great significance in digital entertainment. With rising demands on sketch quality in complex environments, however, synthesizing realistic sketches from limited training data has become an urgent issue. Existing face sketch methods pay little attention to the problem of insufficient training data, so their synthesized sketches exhibit noise or lose identity-specific information in real-world applications. Aiming to provide sufficient photo-sketch pairs that meet the demands of digital entertainment, we present a cross-domain face sketch synthesis framework in this paper. In the photo-sketch mixed domain, we leverage a generative adversarial network to construct a cross-domain mapping function and generate identity-preserving face sketches as hidden training data. Combining these with the insufficient original training data, we obtain sufficient training data to recover the underlying structures and to learn the cross-domain transfer of high-level qualitative knowledge from the photo domain to the sketch domain via latent low-rank representation. Qualitative and quantitative evaluations on a public facial photo-sketch database demonstrate that the proposed cross-domain face sketch synthesis method successfully solves the problem of insufficient training data. It outperforms other state-of-the-art works and generates more vivid and cleaner facial sketches.
Highlights
Face sketch synthesis technique has drawn considerable interest in entertainment [1]
To recover the underlying structure and learn the cross-domain transfer of high-level qualitative knowledge from the photo domain to the sketch domain, we introduce the hidden data into low-rank representation (LRR) to obtain a latent low-rank representation (LLRR)
The proposed cross-domain approach is superior to current face sketch synthesis methods in two aspects
Summary
Face sketch synthesis has drawn considerable interest in entertainment [1]. The dataset for training a synthesis model is limited: in most cases, it includes only facial photos and sketches in the frontal view under normal lighting. With insufficient training pairs, it is hard for existing face sketch methods to recover the underlying structure or construct the mapping model. Aiming to synthesize face sketches when training data are insufficient, we present a cross-domain synthesis framework. To build sufficient training data, we learn a nonlinear cross-domain mapping relationship in the photo-sketch mixed domain with generative adversarial networks (GANs). The cross-domain mapping function is transferred from the training data to the test data, generating hidden sketches that preserve the characteristics of the test photos. The proposed cross-domain approach is superior to current face sketch synthesis methods in two aspects.
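To make the low-rank recovery step concrete, the following is an illustrative sketch only, not the authors' implementation: the core subproblem inside LRR/LLRR solvers is the proximal operator of the nuclear norm, computed by soft-thresholding singular values. The data matrix, dimensions, and threshold below are all hypothetical stand-ins for stacked photo/hidden-sketch features.

```python
import numpy as np

def singular_value_threshold(X, tau):
    """Soft-threshold the singular values of X: the proximal operator of
    the nuclear norm, a building block of LRR/LLRR-style solvers."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # shrink, zeroing small (noise) values
    return U @ np.diag(s_shrunk) @ Vt

# Toy data: columns stand in for photo / hidden-sketch feature vectors;
# a shared low-rank structure models the cross-domain underlying structure.
rng = np.random.default_rng(0)
basis = rng.standard_normal((64, 3))               # rank-3 underlying structure
coeffs = rng.standard_normal((3, 20))
X = basis @ coeffs + 0.01 * rng.standard_normal((64, 20))  # noisy observations

X_lowrank = singular_value_threshold(X, tau=1.0)
print(np.linalg.matrix_rank(X_lowrank, tol=1e-6))
```

With this threshold the small noise-driven singular values are suppressed while the dominant structure survives, which is the sense in which the underlying structure is "recovered" from noisy, combined training data.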