Abstract

With the rapid development of deep neural networks in computer vision, style transfer technology has also made significant progress. Cycle-GAN can perform object deformation, style transfer, and image enhancement without a one-to-one mapping between source and target domains, and its performance on painting style transfer is widely recognized. In Cycle-GAN, the choice of generator model is crucial; the common backbones are ResNet and U-Net. The ResNet generator retains part of the original features through the skip connections of its residual blocks, preventing the image from losing important information and helping preserve the authenticity of the image. The U-Net generator extracts more features and details through a deeper, more complex encoder-decoder architecture, making it well suited to tasks that require extensive feature extraction. However, few studies have directly compared their performance in the context of Cycle-GAN style transfer. This paper compares and analyzes the effects of the U-Net and ResNet generators in Cycle-GAN style transfer from different perspectives, discussing their respective advantages and limitations in the training process and in the quality of the generated images. Quantitative and qualitative analyses based on the experimental results provide references and insights for researchers and practitioners in different scenarios. The findings indicate that in the Cycle-GAN artwork style transfer task, the U-Net generator tends to generate excessive detail and texture, leading to overly complex transformed images. In contrast, the ResNet generator demonstrates superior performance, converging faster and producing higher-quality, more natural results.
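The structural difference between the two generators described above can be illustrated with a minimal sketch (an assumption for illustration, not the paper's actual implementation): a ResNet residual block *adds* its input back to the transformed features, whereas a U-Net skip connection *concatenates* encoder features onto decoder features, giving the decoder direct access to fine-grained detail. NumPy stands in here for real convolutional layers.

```python
import numpy as np

def residual_block(x, transform):
    # ResNet-style skip connection: the input is added back to the
    # transformed features, so original content flows through unchanged.
    return x + transform(x)

def unet_skip(encoder_feat, decoder_feat):
    # U-Net-style skip connection: encoder features are concatenated
    # (not added) to decoder features along the channel axis, so the
    # decoder sees fine-grained encoder detail directly.
    return np.concatenate([encoder_feat, decoder_feat], axis=0)

# Toy "transform" standing in for a pair of conv layers (assumed
# shape-preserving so the residual addition is valid).
transform = lambda x: 0.1 * x

x = np.ones(4)
res_out = residual_block(x, transform)           # same shape as x
skip_out = unet_skip(np.ones(4), np.zeros(4))    # doubled channel dim
```

The addition in the residual block keeps the output in the same feature space as the input, which is consistent with the paper's observation that ResNet better preserves image authenticity; concatenation grows the feature dimension, feeding the U-Net decoder more raw detail to work with.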
