Abstract

Deep-learning-based face parsing methods have attained state-of-the-art performance in recent years. This superior performance, however, depends heavily on large-scale annotated training data, and constructing a large-scale, manually annotated pixel-level dataset for face parsing is expensive and time-consuming. To alleviate this issue, we propose a novel Dual-Structure Disentangling Variational Generation (D2VG) network. Benefiting from the interpretable, factorized latent disentanglement of the VAE, D2VG learns a joint structural distribution over a facial image and its corresponding parsing map, and can therefore synthesize large-scale paired face images and parsing maps by sampling from a standard Gaussian distribution. We then train a face parsing model in a supervised way on both manually annotated and synthesized data. Since the synthesized parsing maps contain inaccurate pixel-level labels, we introduce a coarseness-tolerant learning algorithm to handle these noisy or uncertain labels effectively, which significantly boosts face parsing performance. Extensive quantitative and qualitative results on HELEN, CelebAMask-HQ and LaPa demonstrate the superiority of our method.
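The abstract does not specify the form of the coarseness-tolerant objective. As a hedged illustration only (not the paper's actual algorithm), one standard way to tolerate noisy pixel labels is a soft-bootstrapped cross-entropy in the style of Reed et al., which blends the possibly coarse label with the model's own prediction; the function name and the trust weight `beta` below are assumptions for this sketch:

```python
import numpy as np

def bootstrapped_ce(probs, labels, beta=0.8):
    """Soft-bootstrapped cross-entropy: the training target is a mix of
    the (possibly noisy) one-hot label and the model's own prediction,
    so confident model outputs can partially override coarse synthetic
    labels. `beta` controls how much the given label is trusted.

    probs  : (N, C) predicted class probabilities per pixel
    labels : (N,)   integer class indices from the parsing map
    """
    n, c = probs.shape
    one_hot = np.eye(c)[labels]                       # (N, C) hard targets
    target = beta * one_hot + (1.0 - beta) * probs    # softened targets
    return -np.mean(np.sum(target * np.log(probs + 1e-12), axis=1))
```

With `beta = 1.0` this reduces to ordinary cross-entropy; lowering `beta` shrinks the penalty on pixels where the model confidently disagrees with a coarse synthesized label.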
