Abstract

DeepFaceLab, also known as AI face swapping, is a computer-vision task of practical value that has become popular in recent years. Because it can replace any face in an existing video with any other desired face without revealing visible flaws, face-swapping technology is increasingly welcomed by the entertainment, film, and art industries, producing high commercial value. A number of works based on convolutional neural networks or generative adversarial networks have been proposed to extract facial features and accomplish face swapping. However, their network architectures do not handle small-face swapping well, and the quality of the generated video is also not ideal. In this paper, we train the SAEHD model on top of a single-shot scale-invariant face detector. We use a scale-equitable face detection framework to ensure that sufficient features can be extracted at different scales for face swapping. In addition, anchors of different scales are assigned to different feature maps according to the effective receptive field, and we adopt an equal-proportion sampling method to keep the sampling density of the anchors consistent across feature maps. By replacing the face frame by frame, we achieve very good results on the DeepFaceLab task, obtaining relatively small source and destination losses, respectively, at real-time speed.
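The equal-proportion anchor assignment described above can be illustrated with a short sketch. The concrete values below (six detection layers with strides 4–128 and anchor scales fixed at 4× the stride) follow the original S3FD single-shot scale-invariant face detector design and are an assumption about this paper's configuration, not taken from it:

```python
# Sketch of equal-proportion anchor assignment in the S3FD style.
# Assumption: strides and the 4x scale ratio follow the S3FD paper;
# this paper may use different values.

def anchor_scales(strides, ratio=4):
    """Return one anchor scale per feature map.

    Fixing scale = ratio * stride keeps the anchor "sampling density"
    (scale / stride) identical on every feature map, so small and large
    faces are tiled with anchors at the same relative granularity.
    """
    return [s * ratio for s in strides]

# Feature-map strides of the six detection layers (matched to the
# effective receptive field at each layer).
strides = [4, 8, 16, 32, 64, 128]
scales = anchor_scales(strides)          # [16, 32, 64, 128, 256, 512]

# Equal-proportion check: density is constant across feature maps.
densities = [sc / st for sc, st in zip(scales, strides)]
assert all(d == densities[0] for d in densities)
print(scales)
```

Under this scheme, a 16-pixel face is covered by the stride-4 layer with the same anchor density as a 512-pixel face on the stride-128 layer, which is what makes the detector scale-equitable.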
