Within the broader context of Information and Communications Technology (ICT), the quest for reliable and scalable visual segmentation methods poses significant challenges, particularly in autonomous driving, where the complexity of real-world scenes demands advanced solutions. To address data scarcity and improve segmentation performance, we propose a novel unsupervised domain adaptation (UDA) approach that enhances learning in the target domain. Our method introduces a multi-perturbation consistency scheme that exploits spatial context within the target domain to improve recognition. By applying perturbations at both the input and feature levels and enforcing a consistency loss between the resulting predictions, we strengthen contextual learning. In addition, a weight-mapping technique reduces the influence of detrimental source-domain information. Experimental results demonstrate that our approach outperforms baseline methods on the GTAV→Cityscapes and SYNTHIA→Cityscapes benchmarks.
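
As a minimal illustrative sketch (not the paper's actual implementation), the PyTorch-style code below shows one plausible way a multi-perturbation consistency term could be assembled: a clean prediction on an unlabeled target image is compared against predictions obtained under an input-level perturbation and a feature-level perturbation. The `model.encode`/`model.decode` split, the `input_perturb` augmentation, the dropout-based feature perturbation, and the MSE consistency measure are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F


def consistency_loss(perturbed_logits, clean_logits):
    # Mean-squared error between softmax predictions; a KL divergence
    # would be an equally plausible consistency measure.
    return F.mse_loss(F.softmax(perturbed_logits, dim=1),
                      F.softmax(clean_logits, dim=1))


def multi_perturbation_consistency(model, target_images,
                                   input_perturb, feature_dropout_p=0.1):
    """Consistency across input- and feature-level perturbations on
    unlabeled target-domain images.

    Assumptions (hypothetical, for illustration only): `model` exposes
    `encode` (backbone features) and `decode` (per-pixel logits) stages,
    and `input_perturb` is any photometric augmentation callable.
    """
    # Clean prediction used as the consistency target (no gradient).
    with torch.no_grad():
        clean_logits = model.decode(model.encode(target_images))

    # Input-level perturbation: augment the image, features left intact.
    perturbed_images = input_perturb(target_images)
    logits_input = model.decode(model.encode(perturbed_images))

    # Feature-level perturbation: dropout applied to the clean features.
    features = model.encode(target_images)
    logits_feat = model.decode(
        F.dropout2d(features, p=feature_dropout_p, training=True))

    # Encourage both perturbed predictions to agree with the clean one.
    return (consistency_loss(logits_input, clean_logits) +
            consistency_loss(logits_feat, clean_logits))
```

In a full UDA pipeline this term would typically be added to a supervised source-domain loss, with the source contribution re-weighted (e.g., by the paper's weight-mapping idea) to suppress detrimental source information.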