Abstract

Face verification between ID photos and life photos (FVBIL) is gaining traction with the rapid development of the Internet. However, ID photos provided by the Chinese administration center are often corrupted with wavy lines to prevent misuse, which poses great difficulty for accurate FVBIL. This paper therefore aims to improve verification performance by studying a new problem, blind face inpainting, in which clean face images are restored from corrupted ID photos. The term "blind" indicates that the locations of the corruptions are not known in advance. We formulate blind face inpainting as a joint detection and reconstruction problem, and accordingly develop a multi-task ConvNet that enables end-to-end training for accurate and fast inpainting. The ConvNet is trained to (i) regress the residual values between clean/corrupted ID photo pairs and (ii) predict the positions of the residual regions. Moreover, to achieve better inpainting results, we employ a skip connection to fuse information from an intermediate layer. To enable training of our ConvNet, we collect a dataset of synthetic clean/corrupted ID photo pairs comprising 500 thousand samples from around 10 thousand individuals. Experiments demonstrate that our multi-task ConvNet achieves superior performance in terms of reconstruction error, convergence speed, and verification accuracy.
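To make the joint detection/reconstruction formulation concrete, the following is a minimal PyTorch-style sketch of such a multi-task network, not the authors' exact architecture: a shared encoder, a decoder with one skip connection, a head that regresses the residual, and a head that predicts a corruption mask. All layer sizes, the number of layers, and the sign convention for the residual are assumptions made for illustration.

```python
# Hypothetical sketch of a multi-task blind face inpainting ConvNet (not the paper's exact model).
import torch
import torch.nn as nn

class BlindInpaintNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder (downsamples twice).
        self.enc1 = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        # Decoder; the second stage fuses an intermediate encoder feature via a skip connection.
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(64 + 64, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        # Task heads: (i) residual regression, (ii) corruption-position (mask) prediction.
        self.residual_head = nn.Conv2d(64, 3, 3, padding=1)
        self.mask_head = nn.Conv2d(64, 1, 3, padding=1)

    def forward(self, x):
        f1 = self.enc1(x)
        f2 = self.enc2(f1)
        d = self.dec1(f2)
        d = self.dec2(torch.cat([d, f1], dim=1))  # skip connection: fuse intermediate features
        residual = self.residual_head(d)          # estimate of (corrupted - clean); sign is an assumption
        mask_logits = self.mask_head(d)           # where the wavy-line corruptions are located
        restored = x - residual                   # clean estimate = corrupted input minus predicted residual
        return restored, residual, mask_logits
```

Training such a sketch could combine an L1 loss on the restored image (or on the residual) with a binary cross-entropy loss on the mask logits over the synthetic clean/corrupted pairs; the loss weighting between the two tasks is likewise an assumption here, not something specified in the abstract.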
