Abstract

Generative Adversarial Networks with style-based generators can synthesize realistic images from input latent codes. Moreover, recent studies have revealed that interpretable edits of generated images can be obtained by linearly traversing the latent space. However, in most existing latent spaces, linear interpolation often leads to ‘spatially entangled modification’ in the manipulation result, which is undesirable in many real-world applications that require local editing. To address this problem, we propose manipulating the latent code in the ‘style space’ and analyze its advantages for achieving spatial disentanglement. Furthermore, we point out the weakness of simple interpolation in the style space and propose ‘Style Intervention’, a lightweight optimization-based algorithm, to further improve the visual fidelity of manipulation results. We verify the performance of our method on the task of attribute editing for high-resolution face images. Both qualitative and quantitative results demonstrate the advantage of image translation in the style space and the effectiveness of our method on both real and synthetic images.
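The latent-space traversal the abstract refers to can be sketched as a simple vector operation. The snippet below is a minimal illustration (not the paper's implementation) of shifting a latent code along a hypothetical attribute direction; the direction vector, dimensionality, and step size are assumptions for demonstration only.

```python
import numpy as np

def linear_edit(w: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """Shift latent code w along a unit-normalized attribute direction by magnitude alpha.

    In an entangled latent space, such a global shift tends to modify
    spatially unrelated regions of the output image as well, which is
    the problem that editing in the style space aims to mitigate.
    """
    n = direction / np.linalg.norm(direction)
    return w + alpha * n

rng = np.random.default_rng(0)
w = rng.standard_normal(512)   # latent code (dimensionality assumed)
d = rng.standard_normal(512)   # hypothetical attribute direction (e.g., 'smile')

w_edited = linear_edit(w, d, alpha=3.0)

# Projection of the edit onto the direction equals the step size alpha.
shift = (w_edited - w) @ (d / np.linalg.norm(d))
```

Feeding `w_edited` back through the generator would then produce the manipulated image; the paper's contribution is to perform this kind of edit in the style space and refine it with a lightweight optimization.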
