Abstract

Recently, deep-learning-based face super-resolution methods have succeeded in hallucinating high-resolution face images from low-resolution inputs. However, because the input faces are tiny and have arbitrary characteristics that must be reconstructed at high magnification factors, existing methods still suffer from a large space of possible mapping functions, which limits their ability to produce sharp textures. In this paper, we propose a novel CNN-based Dual Closed-loop Network (DCLNet) to shrink this mapping space. To that end, we design two dual learning networks that form a closed-loop structure with a primary face super-resolution network, providing the primary branch with an additional prior constraint that guides the restoration of essential facial features. Our work is the first attempt to introduce multiple dual learning networks into a face super-resolution model to constrain the possible mapping space. In addition, a progressive facial prior estimation framework and a new prior-guided feature enhancement module are presented to integrate facial prior knowledge and guide face image super-resolution. By generating multiple facial component maps that activate essential facial parts, the enhancement module addresses the difficulty of learning and integrating strong priors into a face super-resolution model, so the collaboration between the face super-resolution and alignment processes is effectively strengthened. Extensive experiments on the CelebA and Helen datasets show that the proposed method achieves state-of-the-art or better performance in both quantitative and qualitative measurements.
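The closed-loop idea described above can be illustrated with a minimal sketch: a primary network maps the low-resolution face to high resolution, while a dual network maps the result back to low resolution, and a cycle-consistency term constrains the admissible mappings. The module names (PrimarySR, DualDownsampler), the 4x scale factor, and the loss weighting below are illustrative assumptions, not the paper's exact architecture or training setup.

import torch
import torch.nn as nn
import torch.nn.functional as F


class PrimarySR(nn.Module):
    """Primary branch: maps a low-resolution face to a high-resolution one."""

    def __init__(self, scale=4, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr):
        return self.body(lr)


class DualDownsampler(nn.Module):
    """Dual branch: maps the super-resolved face back to low resolution,
    closing the loop and constraining the space of possible mappings."""

    def __init__(self, scale=4, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=scale, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, sr):
        return self.body(sr)


def closed_loop_loss(lr, hr, primary, dual, lambda_dual=0.1):
    """Primary reconstruction loss plus a dual (cycle-consistency) term."""
    sr = primary(lr)       # LR -> HR
    lr_rec = dual(sr)      # HR -> LR, should match the original input
    return F.l1_loss(sr, hr) + lambda_dual * F.l1_loss(lr_rec, lr)


if __name__ == "__main__":
    primary, dual = PrimarySR(), DualDownsampler()
    lr = torch.rand(2, 3, 16, 16)   # tiny 16x16 face crops
    hr = torch.rand(2, 3, 64, 64)   # 4x ground-truth faces
    loss = closed_loop_loss(lr, hr, primary, dual)
    loss.backward()
    print(loss.item())

In the paper's full design, two such dual networks are used together with the prior-guided feature enhancement module; this sketch only shows the basic loop-closing constraint on one branch.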
