In the evolving landscape of deep learning, Deepfakes and other forms of synthetic media are becoming increasingly prominent in digital media production. This research addresses limitations of existing face image generation algorithms based on Generative Adversarial Networks (GANs), particularly domain irrelevance and inadequate representation of facial detail. The study introduces an enhanced face image generation algorithm that refines the CycleGAN framework in two ways. First, the generator architecture is augmented with an attention mechanism and adaptive residual blocks, enabling the extraction of more nuanced facial features. Second, the discriminator's accuracy in distinguishing real from synthetic images is improved by incorporating a relative loss term into the loss function. The study also presents a novel training strategy that imposes age constraints, mitigating the effect of age variation on the synthesized images. The proposed algorithm is validated empirically through comparative experiments against existing methods on the CelebA dataset. The results show that it markedly improves the realism of generated face images, outperforming current methods in Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) while also achieving notable gains in subjective visual quality. The proposed method is expected to substantially improve both the efficiency and the quality of digital media production, contributing positively to the broader field of digital media creation.
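The abstract describes incorporating a relative loss into the discriminator so that real images are scored relative to synthetic ones. The paper's exact formulation is not given here; as an illustration of the general idea only, the following is a minimal NumPy sketch of a relativistic average discriminator loss, in which each real logit is compared against the mean fake logit and vice versa:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relativistic_d_loss(real_logits, fake_logits, eps=1e-12):
    """Illustrative relativistic average discriminator loss.

    The discriminator is rewarded for scoring real samples higher
    than the *average* fake score, and fake samples lower than the
    *average* real score, rather than judging each in isolation.
    This is one common instantiation of a relative loss; the paper
    may use a different variant.
    """
    # Probability that a real sample is "more real" than the average fake.
    real_rel = sigmoid(real_logits - fake_logits.mean())
    # Probability that a fake sample is "more real" than the average real.
    fake_rel = sigmoid(fake_logits - real_logits.mean())
    # Binary cross-entropy on the relative scores.
    return -(np.log(real_rel + eps).mean()
             + np.log(1.0 - fake_rel + eps).mean())
```

A discriminator that correctly ranks real logits above fake ones incurs a small loss; reversing the ranking makes the loss large, which is the signal the generator then exploits.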
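The evaluation relies on PSNR (alongside SSIM) to compare generated and reference images. For reference, PSNR has a standard closed-form definition, sketched below in NumPy for 8-bit images (this reproduces the common formula, not any paper-specific variant):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in decibels.

    PSNR = 10 * log10(MAX^2 / MSE), where MSE is the mean squared
    error between the reference and test images and MAX is the
    largest possible pixel value (255 for 8-bit images).
    """
    mse = np.mean((np.asarray(ref, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher PSNR indicates a generated image closer to the reference; values above roughly 30 dB are typically considered good for natural images, which is why the metric is a common proxy for the realism improvements the abstract reports.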