Abstract

This paper presents an enhanced Cycle-Consistent Adversarial Network (CycleGAN) model aimed at preserving semantic consistency during image-to-image translation, with a focus on demanding applications such as autonomous driving and scientific simulation. The study's key contribution is the incorporation of a pre-trained semantic segmentation model to preserve important structures during translation, such as license plates, traffic signs, and pedestrians. By introducing a semantic consistency loss alongside the traditional cycle-consistency loss, the proposed approach ensures that key features are retained even in challenging scenes. Extensive experiments on the Cityscapes dataset demonstrate the method's effectiveness in maintaining both visual fidelity and semantic accuracy, significantly improving on the traditional CycleGAN. The method proves particularly valuable in domains where precision is essential, such as cross-domain image generation for autonomous systems and medical imaging. Future research will focus on optimizing the model for real-time applications and exploring multi-domain frameworks to further enhance its performance in diverse environments. Overall, this study offers an efficient image style-transfer solution that preserves semantic integrity without sacrificing translation accuracy.
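For concreteness, the combined objective described above can be sketched as follows. The adversarial and cycle-consistency terms follow the standard CycleGAN formulation; the exact form of the semantic term, the cross-entropy comparison, and the weights \lambda_{\text{cyc}} and \lambda_{\text{sem}} are illustrative assumptions, not equations quoted from the paper:

\mathcal{L}(G, F, D_X, D_Y) = \mathcal{L}_{\text{GAN}}(G, D_Y, X, Y) + \mathcal{L}_{\text{GAN}}(F, D_X, Y, X) + \lambda_{\text{cyc}}\,\mathcal{L}_{\text{cyc}}(G, F) + \lambda_{\text{sem}}\,\mathcal{L}_{\text{sem}}(G, F)

\mathcal{L}_{\text{sem}}(G, F) = \mathbb{E}_{x \sim X}\big[\mathrm{CE}\big(S(G(x)),\, S(x)\big)\big] + \mathbb{E}_{y \sim Y}\big[\mathrm{CE}\big(S(F(y)),\, S(y)\big)\big]

where G: X \to Y and F: Y \to X are the two generators, D_X and D_Y the discriminators, S the frozen pre-trained segmentation network, and CE a per-pixel cross-entropy between the predicted label maps of an image and its translation. Intuitively, the semantic term penalizes translations that change what the segmentation network sees, which is how structures like traffic signs and pedestrians are kept intact.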
