Abstract

Facial appearance transfer (FAT) is a critical component of various facial editing tasks. It aims to transfer the facial appearance of a reference onto a target with good visual consistency. When there are considerable visual differences between the reference and the target, however, the transfer may introduce visual artifacts into the results. To tackle this problem, we propose a facial appearance map with illumination-aware and region-aware properties that enables seamless FAT. We formulate appearance-map generation as label propagation (LP) on a similarity graph and propose a new regularization structure to facilitate adaptive appearance-map diffusion. Solving the original LP model for the appearance map generally requires $O(kn^2)$ time on a graph with $n$ nodes, each having $k$ neighbors, which may be computationally prohibitive for images of large spatial resolution. To address this issue, we mathematically analyze the graph-based LP model and propose a fast approximation algorithm with smart subset sampling. It selects a subset of $m$ nodes from the $n$-node graph ($m \ll n$) to approximate the solution of the original system, which significantly reduces the computational cost from $O(kn^2)$ to $O(m^2 n)$. Based on the adaptive LP-based appearance map, we construct a framework that achieves various editing effects with FAT, including face replacement, face dubbing, face swapping, and transfiguring. Comparisons with related methods show the effectiveness of the adaptive LP model for FAT, and qualitative and quantitative evaluations verify the computational improvements of the approximation algorithm.
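To make the claimed complexity reduction concrete, the sketch below illustrates one standard way such a speed-up can be realized: a Nyström-style landmark approximation of the similarity matrix combined with the Woodbury identity, applied to the usual LP closed form $f = (I - \alpha W)^{-1} y$. This is only a minimal illustration under assumed ingredients; the function name, the RBF kernel, uniform landmark sampling, and the parameters are hypothetical stand-ins, not the paper's smart subset sampling scheme or regularization structure, and degree normalization of $W$ is omitted for brevity.

```python
import numpy as np

def nystrom_label_propagation(X, y, m=256, alpha=0.99, gamma=1.0, seed=0):
    """Approximate LP solution f = (I - alpha*W)^{-1} y using an m-landmark
    Nystrom factorization W ~= C @ inv(W_mm) @ C.T, which lowers the cost
    from O(k n^2) to roughly O(m^2 n).

    X : (n, d) per-node features; y : (n,) initial labels/values.
    Illustrative sketch only -- not the paper's sampling criterion.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)   # uniform landmark sampling

    # RBF similarities between all n nodes and the m landmarks: (n, m)
    d2 = ((X[:, None, :] - X[None, idx, :]) ** 2).sum(-1)
    C = np.exp(-gamma * d2)
    W_mm = C[idx]                                # (m, m) landmark-landmark block

    # Woodbury identity:
    #   (I - alpha * C W_mm^{-1} C^T)^{-1} y
    #     = y - C (C^T C - W_mm / alpha)^{-1} C^T y
    # Forming C^T C is the dominant O(m^2 n) step; the solve is O(m^3).
    M = C.T @ C - W_mm / alpha
    f = y - C @ np.linalg.solve(M, C.T @ y)
    return f
```

The design point is that the $n \times n$ system is never formed: only the thin $n \times m$ matrix $C$ and an $m \times m$ system are materialized, so memory and time both scale linearly in $n$ for fixed $m$.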
