This paper presents AMFANet, a deep learning model for high-quality image style transfer. AMFANet integrates an Adaptive Multi-Scale Feature Fusion (AMSF) module and a Hybrid Attention Mechanism (HAM) to improve style consistency, content fidelity, and texture preservation, and employs Segmented Atrous Spatial Pyramid Pooling (SASPP) for effective multi-scale feature extraction. Comprehensive experimental evaluations demonstrate that AMFANet surpasses state-of-the-art models such as StyleGAN3, ChipGAN, ACL-GAN, and CycleGAN in generating high-fidelity stylized images while preserving fine details and artistic character. Future research will focus on optimizing computational efficiency, enabling multi-style transfer, enhancing user interaction, and exploring cross-domain applications. These findings highlight AMFANet's potential as a robust solution for image style transfer in both artistic and practical domains.