Abstract

In this paper, we propose a novel controllable sketch-to-image translation framework that allows users to interactively and robustly synthesize and edit face images with hand-drawn sketches. Inspired by the coarse-to-fine painting process of human artists, we propose a dilation-based sketch refinement method that refines sketches at varying levels of coarseness without requiring real sketch training data. We further investigate multi-level refinement, which lets users flexibly specify, through a refinement level control parameter, how "reliable" the input sketch should be considered for the final output, balancing the realism of the output against its structural consistency with the input sketch. This control is realized by leveraging scale-aware style transfer to model and adjust the style features of sketches at different coarseness levels. Moreover, advanced user controllability, in terms of editing region control, facial attribute editing, and spatially non-uniform refinement, is further explored for fine-grained and semantic editing. We demonstrate the effectiveness of the proposed method in terms of visual quality and user controllability through extensive experiments, including qualitative and quantitative comparisons with state-of-the-art methods, ablation studies, and various applications.
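To make the dilation idea concrete, the following is a minimal, hypothetical Python sketch of how coarse sketch variants could be generated by morphological dilation. It assumes OpenCV; the `coarsen_sketch` function, the level-to-kernel-size mapping, and the file name are illustrative assumptions, not the paper's actual implementation.

```python
import cv2
import numpy as np

def coarsen_sketch(sketch: np.ndarray, level: int) -> np.ndarray:
    """Dilate a line sketch; a larger `level` yields a coarser sketch.

    `sketch` is assumed to be a single-channel uint8 image with dark
    strokes on a white background, so it is inverted before dilation.
    """
    if level <= 0:
        return sketch.copy()
    kernel_size = 2 * level + 1  # hypothetical mapping from level to kernel size
    kernel = cv2.getStructuringElement(
        cv2.MORPH_ELLIPSE, (kernel_size, kernel_size)
    )
    inverted = 255 - sketch           # strokes become white for dilation
    dilated = cv2.dilate(inverted, kernel)
    return 255 - dilated              # back to dark strokes on white

# Example: build a pyramid of progressively coarser training sketches.
sketch = cv2.imread("face_sketch.png", cv2.IMREAD_GRAYSCALE)
coarse_levels = [coarsen_sketch(sketch, lvl) for lvl in range(4)]
```

Under these assumptions, such a pyramid would provide coarse sketch variants for training without collecting real hand-drawn sketches, matching the motivation stated above.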
