Abstract

Recent works based on deep learning and facial priors have performed well in super-resolving severely degraded facial images. However, owing to limitations of illumination, the pixel resolution of the monitoring camera itself, the focusing area, and human motion, captured face images are often blurred or even deformed. To address this problem, we propose Face Restoration Generative Adversarial Networks to improve the resolution and restore the details of blurred faces. The framework comprises a Head Pose Estimation Network, a Postural Transformer Network, and Face Generative Adversarial Networks. In this paper, we employ the following: (i) a Swish-B activation function in the Face Generative Adversarial Networks to accelerate the convergence of the cross-entropy cost function, (ii) a special prejudgment monitor added to improve the accuracy of the discriminator, and (iii) a modified Postural Transformer Network used together with a 3D face reconstruction network to correct faces at different expression and pose angles. Our method improves the resolution of face images and performs well in image restoration. We demonstrate that it produces high-quality faces and outperforms the most advanced methods on the blind face reconstruction task for in-the-wild images; in particular, our 8× SR SSIM and PSNR are, respectively, 0.078 and 1.16 higher than FSRNet on AFLW.
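
The abstract refers to a Swish-B activation inside the Face Generative Adversarial Networks, but its exact formulation is not reproduced on this page. The snippet below is therefore only a minimal sketch assuming the common Swish variant with a learnable β, f(x) = x · sigmoid(βx); the class name `SwishB` and the trainable `beta` parameter are illustrative assumptions, not the paper's definition.

```python
import torch
import torch.nn as nn

class SwishB(nn.Module):
    """Swish-style activation with a learnable beta: f(x) = x * sigmoid(beta * x).

    NOTE: illustrative sketch of a Swish variant with a trainable beta;
    the paper's exact Swish-B definition may differ.
    """
    def __init__(self, beta_init: float = 1.0):
        super().__init__()
        # beta is learned jointly with the generator/discriminator weights
        self.beta = nn.Parameter(torch.tensor(beta_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(self.beta * x)


if __name__ == "__main__":
    act = SwishB()
    x = torch.randn(4, 64, 32, 32)   # e.g. a feature map inside the generator
    y = act(x)
    print(y.shape)                   # torch.Size([4, 64, 32, 32])
```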

Highlights

  • Image generation has attracted broad attention in recent years

  • To supervise the training of the Head Pose Estimation Network (HPENet), we introduce a subnetwork that guides the training of the network model. The subnetwork works only in the training phase. It estimates the 3D Euler angles of the input face sample, and its ground truth is derived from the key-point information in the training data. The purpose of the subnetwork is to supervise and assist training convergence, mainly in service of the key-point detection network. Its input is not the training data but the intermediate output of the HPENet (a minimal sketch of such an auxiliary pose head follows this list)

  • Building on the qualitative comparison, a quantitative comparison is further carried out in Section 4.4 to evaluate the ability of face restoration and the quality of the synthesized images. The peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) are adopted to quantitatively compare against the related state of the art (a snippet showing how these metrics are typically computed also follows this list)
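
The HPENet highlight above describes an auxiliary subnetwork that regresses the 3D Euler angles from an intermediate HPENet feature map and is supervised, during training only, by angles derived from the key-point annotations. The sketch below illustrates one plausible form of such a pose head and its supervision loss; the module name `AuxPoseHead`, the layer sizes, and the L2 pose loss are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class AuxPoseHead(nn.Module):
    """Auxiliary head regressing (yaw, pitch, roll) from an intermediate
    HPENet feature map. Used only at training time to supervise convergence;
    channel count and pooling choice are illustrative assumptions."""
    def __init__(self, in_channels: int = 128):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # collapse spatial dimensions
        self.fc = nn.Linear(in_channels, 3)   # 3 Euler angles

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        x = self.pool(feat).flatten(1)
        return self.fc(x)                     # (N, 3): yaw, pitch, roll


def pose_supervision_loss(pred_angles, gt_angles):
    # Ground-truth angles are assumed to be pre-computed from the landmark
    # annotations in the training data; an L2 penalty is one plausible choice.
    return nn.functional.mse_loss(pred_angles, gt_angles)


if __name__ == "__main__":
    feat = torch.randn(8, 128, 16, 16)        # intermediate HPENet features
    gt = torch.randn(8, 3)                    # Euler angles from landmarks
    head = AuxPoseHead()
    loss = pose_supervision_loss(head(feat), gt)
    loss.backward()
```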

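For the quantitative comparison, PSNR and SSIM are standard full-reference image quality metrics. The snippet below shows one common way to compute them with scikit-image; the helper name `evaluate_pair`, the assumed [0, 1] float range, and the example arrays are illustrative only.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(sr: np.ndarray, hr: np.ndarray):
    """Compute PSNR / SSIM between a super-resolved face `sr` and its
    ground-truth `hr`. Both are assumed to be float arrays in [0, 1] with
    shape (H, W, 3); adjust data_range for uint8 inputs."""
    psnr = peak_signal_noise_ratio(hr, sr, data_range=1.0)
    # channel_axis=-1 tells scikit-image (>= 0.19) the last axis holds RGB
    ssim = structural_similarity(hr, sr, data_range=1.0, channel_axis=-1)
    return psnr, ssim


if __name__ == "__main__":
    hr = np.random.rand(128, 128, 3)
    sr = np.clip(hr + 0.05 * np.random.randn(128, 128, 3), 0.0, 1.0)
    print(evaluate_pair(sr, hr))
```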

Summary

Introduction

Image generation has attracted broad attention in recent years. Within these works [1, 2, 3], synthesizing a face from different angles while retaining identity is an important task because of its wide range of industrial applications, such as video monitoring and face analysis. This task has been greatly advanced by a number of Generative Adversarial Network models. Generative adversarial networks have recently demonstrated excellence in image editing [9, 10, 11] and show great potential for producing realistic images [12]; many modified generative adversarial network models have been used to generate realistic face images. There are plenty of such models [13, 14]; for example, CycleGAN is a well-known face reconstruction method based on a data-driven GAN.
