Abstract

It is expensive and time-consuming to collect a dataset with pixel-level annotations large enough to train a semantic segmentation model. Synthetic datasets are a common alternative for training segmentation models; however, models trained on synthetic data do not necessarily perform well on real-world images due to the domain shift problem. Domain adaptation techniques address this problem by leveraging adversarial training to align features. Prior works have mostly performed global feature alignment, without considering the positions of objects. However, objects in urban scenes are highly correlated with their spatial locations. For example, the sky always appears at the top of the image, while cars usually appear in the middle. Based on this insight, we propose a spatial-aware discriminator that accounts for the spatial prior on objects in order to improve feature alignment. Our experiments demonstrate that our model outperforms several state-of-the-art baselines in terms of mean intersection over union (mIoU).
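One simple way to make a discriminator spatially aware is to append normalized coordinate channels to the feature map it receives, so its real/fake decision can be conditioned on image position (in the spirit of CoordConv). This is a minimal illustrative sketch of that idea, not the authors' actual architecture; the function name and NumPy formulation are assumptions for illustration.

```python
import numpy as np

def add_coord_channels(features):
    """Append normalized (row, col) coordinate channels to a feature map.

    features: array of shape (C, H, W).
    Returns an array of shape (C + 2, H, W), where the two extra channels
    encode vertical and horizontal position in [0, 1]. A discriminator fed
    these channels can learn position-dependent decisions, e.g. that "sky"
    features are plausible near the top but not at the bottom.
    (Hypothetical sketch; the paper's exact mechanism may differ.)
    """
    c, h, w = features.shape
    # Row coordinate: 0.0 at the top of the image, 1.0 at the bottom.
    ys = np.repeat(np.linspace(0.0, 1.0, h).reshape(h, 1), w, axis=1)
    # Column coordinate: 0.0 at the left edge, 1.0 at the right edge.
    xs = np.repeat(np.linspace(0.0, 1.0, w).reshape(1, w), h, axis=0)
    return np.concatenate([features, ys[None], xs[None]], axis=0)

feat = np.random.rand(16, 8, 8)            # e.g. segmentation-network features
spatial_feat = add_coord_channels(feat)
print(spatial_feat.shape)                  # (18, 8, 8)
```

The coordinate channels act as a fixed spatial prior: the discriminator no longer has to infer position from content alone, which is what allows position-dependent classes like sky or road to be aligned more precisely.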
