The rise of comics and games has driven growing demand for artistic processing of portrait photos. With this commercial demand and advances in deep learning, neural networks for rapid facial style transfer have become a key research area in computer vision. The task converts face photos into different artistic styles while preserving their content. Face images are structurally more complex than generic images and require more extensive modification during stylization. However, current methods often suffer from unnatural color transitions, loss of detail in highlighted regions, and noticeable artifacts along edges, resulting in low-quality stylized images. In this study, an enhanced generative adversarial network (GAN) is proposed that combines Adaptive Layer-Instance Normalization (AdaLIN) with a Laplacian regularization term. AdaLIN dynamically adjusts the weights between Instance Normalization (IN) and Layer Normalization (LN) parameters during training. By combining the strengths of both normalization techniques, the model selectively preserves and alters content information, striking a balance between style and content; this addresses unnatural color transitions and the loss of detail in highlights that lead to color inconsistencies. Furthermore, the Laplacian regularization term denoises the image, preventing noise features from interfering with color transfer; it also reduces noise-induced color artifacts along the face's edges while preserving the image's contour information. Together, these enhancements significantly improve the quality of the generated face images. To compare our method with traditional CycleGAN and more recent algorithms such as XGAN and CariGAN, both subjective and objective evaluations were conducted.
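To make the two mechanisms concrete, the following is a minimal NumPy sketch, not the paper's implementation: AdaLIN blends per-channel Instance Normalization statistics with per-sample Layer Normalization statistics via a learnable weight `rho`, and a simple 4-neighbour Laplacian penalty illustrates the kind of smoothness term used for denoising. The function names, shapes, and the plain-array `gamma`/`beta` parameters are assumptions for illustration.

```python
import numpy as np

def adalin(x, rho, gamma, beta, eps=1e-5):
    """AdaLIN sketch: blend IN and LN statistics with per-channel weight rho.

    x: feature map of shape (N, C, H, W); rho, gamma, beta: shape (C,).
    In the full model rho, gamma, beta are learned; here they are plain arrays.
    """
    # Instance Norm: normalize each (sample, channel) over spatial dims
    in_mean = x.mean(axis=(2, 3), keepdims=True)
    in_var = x.var(axis=(2, 3), keepdims=True)
    x_in = (x - in_mean) / np.sqrt(in_var + eps)

    # Layer Norm: normalize each sample over all (C, H, W) features
    ln_mean = x.mean(axis=(1, 2, 3), keepdims=True)
    ln_var = x.var(axis=(1, 2, 3), keepdims=True)
    x_ln = (x - ln_mean) / np.sqrt(ln_var + eps)

    # rho -> 1 favors IN (instance-specific statistics),
    # rho -> 0 favors LN (global per-sample statistics)
    rho = np.clip(rho, 0.0, 1.0).reshape(1, -1, 1, 1)
    out = rho * x_in + (1.0 - rho) * x_ln
    return gamma.reshape(1, -1, 1, 1) * out + beta.reshape(1, -1, 1, 1)

def laplacian_penalty(img):
    """Hypothetical Laplacian regularizer: mean squared response of a
    4-neighbour Laplacian filter, which penalizes high-frequency noise."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=-1) + np.roll(img, -1, axis=-1)
           + np.roll(img, 1, axis=-2) + np.roll(img, -1, axis=-2))
    return float(np.mean(lap ** 2))
```

With `rho = 1` the output reduces to pure Instance Normalization (zero mean per sample and channel over the spatial dimensions), and a constant image incurs zero Laplacian penalty, matching the intent of suppressing noise without penalizing flat regions.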
Subjectively, our method produces more natural color transitions and superior artifact removal, achieving higher Mean Opinion Score (MOS) ratings. Objectively, our method achieved better scores across three metrics: FID, SSIM, and MS-SSIM. Together, these objective and subjective evaluations validate the effectiveness of the proposed method.