
Within the broader context of Information and Communications Technology (ICT), reliable and scalable visual segmentation remains a significant challenge, particularly in autonomous driving, where real-world scene complexity demands advanced solutions. To address data scarcity and improve segmentation performance, we propose a novel unsupervised domain adaptation (UDA) approach that strengthens target-domain learning. Our method introduces multi-perturbation consistency, leveraging spatial context within the target domain to improve recognition: perturbations are applied at both the input and feature levels, and a consistency loss between the resulting predictions enhances contextual learning. In addition, a weight-mapping technique reduces the impact of detrimental source-domain information. Experimental results show that our approach outperforms baseline methods on the GTAV→Cityscapes and SYNTHIA→Cityscapes benchmarks.
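The core ingredients described above (input- and feature-level perturbations, a consistency loss between the resulting predictions, and a confidence-based weight map) can be illustrated with a minimal NumPy sketch. Everything here is hypothetical: `toy_segmenter` stands in for the real segmentation network, Gaussian noise and feature dropout stand in for the paper's perturbations, and the max-probability weight map is only one plausible reading of the weight-mapping step, not the authors' actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def toy_segmenter(x, w, feature_dropout=0.0):
    # Hypothetical per-pixel linear "segmenter": features -> class logits.
    feats = x @ w  # (H, W, C)
    if feature_dropout > 0:
        # Feature-level perturbation: inverted dropout on the features.
        mask = rng.random(feats.shape) >= feature_dropout
        feats = feats * mask / (1.0 - feature_dropout)
    return feats

H, W, D, C = 4, 4, 3, 5          # tiny image: H x W pixels, D features, C classes
x = rng.standard_normal((H, W, D))
w = rng.standard_normal((D, C))

# Clean prediction on the unperturbed target image.
p_clean = softmax(toy_segmenter(x, w))

# Input-level perturbation: additive Gaussian noise on the input,
# combined with a feature-level perturbation inside the network.
x_noisy = x + 0.1 * rng.standard_normal(x.shape)
p_pert = softmax(toy_segmenter(x_noisy, w, feature_dropout=0.5))

# Consistency loss: mean squared difference between probability maps.
loss = float(np.mean((p_clean - p_pert) ** 2))

# Assumed weight map: down-weight low-confidence pixels so unreliable
# (e.g. source-biased) predictions contribute less to the loss.
w_map = p_clean.max(axis=-1)                      # (H, W) confidence per pixel
weighted_loss = float(np.mean(w_map[..., None] * (p_clean - p_pert) ** 2))
```

In a real UDA pipeline the loss would be backpropagated through a deep network; the sketch only shows how the two perturbed prediction branches and the pixel-wise weighting compose.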
Unsupervised domain adaptation, Self-training, Autonomous driving, Information technology, T58.5-58.64, Semantic segmentation
