Deep generative models enable the synthesis of realistic human faces from freehand sketches or semantic maps. However, the very flexibility of sketches and semantic maps leaves too many degrees of freedom for manipulation, making them difficult for novice users to control. In this study, we present DeepFaceReshaping, a novel landmark-based deep generative framework for interactive face reshaping. To edit the shape of a face realistically by manipulating only a small number of face landmarks, we employ neural shape deformation to reshape individual face components. Furthermore, we propose a novel Transformer-based partial refinement network to synthesize the reshaped face components conditioned on the edited landmarks, and fuse the components into the entire face using a local-to-global approach. In this manner, we constrain possible reshaping effects to a feasible component-based face space. Thus, our interface is intuitive even for novice users, as confirmed by a user study. Our experiments demonstrate that our method outperforms traditional warping-based approaches and recent deep generative techniques.
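To make the local-to-global idea concrete, the sketch below outlines one possible landmark-conditioned component refinement and fusion pipeline in PyTorch. The module names (ComponentRefiner, FusionNet), layer choices, and tensor shapes are illustrative assumptions for exposition only, not the paper's actual architecture.

```python
# Hypothetical sketch of a landmark-conditioned, local-to-global reshaping pipeline.
# Module names, layer choices, and shapes are illustrative assumptions, not the
# paper's actual implementation.
import torch
import torch.nn as nn

class ComponentRefiner(nn.Module):
    """Refines one face component (e.g., an eye or the mouth) conditioned on its
    edited landmarks; a plain Transformer encoder stands in for a partial
    refinement network."""
    def __init__(self, patch_dim=256, d_model=256, n_layers=4):
        super().__init__()
        self.patch_embed = nn.Linear(patch_dim, d_model)
        self.landmark_embed = nn.Linear(2, d_model)          # (x, y) per landmark
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.to_patch = nn.Linear(d_model, patch_dim)

    def forward(self, patches, landmarks):
        # patches:   (B, N_patches, patch_dim) features of the deformed component
        # landmarks: (B, N_landmarks, 2) user-edited landmark positions
        tokens = torch.cat([self.patch_embed(patches),
                            self.landmark_embed(landmarks)], dim=1)
        tokens = self.encoder(tokens)
        # Keep only the patch tokens and decode them back to component features.
        return self.to_patch(tokens[:, :patches.size(1)])

class FusionNet(nn.Module):
    """Fuses refined component features into an image (local-to-global step)."""
    def __init__(self, patch_dim=256, out_channels=3):
        super().__init__()
        self.decode = nn.Sequential(
            nn.Conv2d(patch_dim, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, out_channels, 3, padding=1), nn.Tanh())

    def forward(self, feature_map):
        # feature_map: (B, patch_dim, H, W) assembled from component features
        return self.decode(feature_map)

# Usage sketch: refine each component separately, then fuse.
refiner, fusion = ComponentRefiner(), FusionNet()
patches = torch.randn(1, 64, 256)      # 8x8 grid of patch features for one component
landmarks = torch.rand(1, 16, 2)       # edited landmarks, normalized to [0, 1]
refined = refiner(patches, landmarks)  # (1, 64, 256)
feature_map = refined.transpose(1, 2).reshape(1, 256, 8, 8)
face_component = fusion(feature_map)   # (1, 3, 8, 8) synthesized component patch
```

In practice each face component would be handled by such a refiner and the results blended into the full face; the point of the sketch is only that conditioning on a small set of edited landmarks keeps the reshaping constrained.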