Simulating the outcome of double eyelid surgery is a challenging task. Many existing approaches rely on complex, time-consuming 3D digital models to reconstruct facial features and simulate facial plastic surgery outcomes. More recent work has applied simple affine transformations to 2D images to simulate double eyelid surgery, but these methods tend to produce unnatural results and require manual removal of masks from the images. To address these issues, we pioneer the use of an unsupervised generative model to synthesize post-operative double eyelid images. First, we constructed a dataset of pre- and post-operative 2D double eyelid surgery images. Second, we proposed a novel attention-class activation map module, embedded in a generative adversarial network, to translate single eyelid images into double eyelid images. This module enables the generator to focus selectively on the eyelid region that distinguishes the source and target domains, while enhancing the discriminator's ability to discern real images from generated ones. Finally, we adjusted the adversarial consistency loss to guide the generator to preserve essential features of the source image and to eliminate residual masks when generating the double eyelid image. Experimental results demonstrate that our approach outperforms existing state-of-the-art techniques.
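The abstract does not spell out the internals of the attention-class activation map module. As a rough illustration only, the PyTorch sketch below shows one common way such a CAM-style attention block is built for image-to-image GANs (in the spirit of U-GAT-IT's CAM attention): auxiliary classifiers score globally pooled features, and their weights re-weight feature channels so the network attends to the discriminative region. The class name `CAMAttention`, the layer choices, and the returned logits/heatmap are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of a CAM-style attention module (assumed design, not the
# paper's code). Channel attention is derived from auxiliary classifiers
# over globally pooled features, as in U-GAT-IT-style CAM attention.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CAMAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Auxiliary classifiers whose weights double as channel importances.
        self.gap_fc = nn.Linear(channels, 1, bias=False)
        self.gmp_fc = nn.Linear(channels, 1, bias=False)
        # Fuse the avg- and max-attended feature maps back to `channels`.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor):
        b, c, _, _ = x.shape

        # Average-pooled branch: classifier logit plus per-channel weights
        # reused as channel attention.
        gap = torch.flatten(F.adaptive_avg_pool2d(x, 1), 1)   # (B, C)
        gap_logit = self.gap_fc(gap)                          # (B, 1)
        gap_weight = self.gap_fc.weight.view(1, c, 1, 1)      # (1, C, 1, 1)
        x_gap = x * gap_weight

        # Max-pooled branch, same idea.
        gmp = torch.flatten(F.adaptive_max_pool2d(x, 1), 1)
        gmp_logit = self.gmp_fc(gmp)
        gmp_weight = self.gmp_fc.weight.view(1, c, 1, 1)
        x_gmp = x * gmp_weight

        # Concatenate both attended maps and fuse back to C channels.
        out = self.relu(self.fuse(torch.cat([x_gap, x_gmp], dim=1)))

        # Spatial heatmap (summed over channels): useful for checking that
        # attention actually concentrates on the eyelid region.
        heatmap = out.sum(dim=1, keepdim=True)

        # CAM logits would feed an auxiliary real/fake or domain loss when
        # the module is embedded in the generator or discriminator.
        cam_logits = torch.cat([gap_logit, gmp_logit], dim=1)
        return out, cam_logits, heatmap
```

In this kind of design, the same block can be dropped into both the generator (to steer editing toward the eyelid) and the discriminator (to sharpen its real-versus-generated judgment), with the `cam_logits` trained by an auxiliary classification loss alongside the usual adversarial objective.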