Abstract: Unsupervised domain adaptation is a central problem in deep learning: the goal is to transfer knowledge learned on a labelled source domain to an unlabelled target domain while preserving model performance. This study addresses the domain adaptation challenge using CycleGAN, a well-known generative adversarial network architecture, focusing on the adaptation of both image style and image content to improve cross-domain performance. We explore the use of CycleGAN to translate images from the source domain so that they more closely match the appearance of the target domain. Because the architecture learns domain mappings without requiring paired data, it facilitates knowledge transfer between dissimilar domains. By emphasizing both content and style adaptation, we aim to mitigate domain shift and distribution mismatch, thereby improving generalization and classification accuracy in the target domain. We conduct extensive experiments on benchmark datasets to demonstrate the efficacy of the proposed approach. Quantitative evaluation with standard metrics shows substantial improvements in cross-domain performance over baseline models and other adaptation strategies, and qualitative assessments confirm that CycleGAN successfully transforms images, adapting both visual appearance and semantic content. By highlighting the need to address visual style and content jointly, this study contributes to the field of unsupervised domain adaptation. The results underscore CycleGAN's potential as a powerful domain adaptation tool, paving the way for improved knowledge transfer and performance in real-world settings across multiple domains.
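For reference, the unpaired domain mapping described above is typically learned by optimizing CycleGAN's standard full objective, which combines two adversarial losses with a cycle-consistency term (following Zhu et al., 2017); here $G : X \to Y$ and $F : Y \to X$ are the two generators, $D_X$ and $D_Y$ the discriminators, and $\lambda$ weights the cycle-consistency penalty:

```latex
\begin{align}
\mathcal{L}(G, F, D_X, D_Y) &= \mathcal{L}_{\mathrm{GAN}}(G, D_Y, X, Y)
  + \mathcal{L}_{\mathrm{GAN}}(F, D_X, Y, X)
  + \lambda \, \mathcal{L}_{\mathrm{cyc}}(G, F), \\
\mathcal{L}_{\mathrm{cyc}}(G, F) &=
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\lVert F(G(x)) - x \rVert_1\right]
  + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\!\left[\lVert G(F(y)) - y \rVert_1\right].
\end{align}
```

The cycle-consistency term is what lets the mappings be learned without paired data: translating an image to the target domain and back must recover the original, which constrains the otherwise underdetermined adversarial mappings.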