Abstract
Training a deep neural network for semantic segmentation relies on pixel-level ground-truth labels for supervision. However, collecting large datasets with pixel-level annotations is expensive and time-consuming. One workaround is to use synthetic data, which can provide potentially unlimited training examples with corresponding ground-truth labels. Unfortunately, networks trained on synthetic data perform poorly on real images due to the domain shift problem. Domain adaptation techniques have shown promise in transferring knowledge learned from synthetic data to real-world data. Prior works have mostly leveraged adversarial training to perform a global alignment of features. However, we observe that background objects vary less across domains than foreground objects. Using this insight, we propose a domain adaptation method that models and adapts foreground objects and background objects separately. Our approach starts with a fast style transfer to match the appearance of the inputs. This is followed by a foreground adaptation module that learns a foreground mask, which our gated discriminator then uses to adapt foreground and background objects separately. Our experiments demonstrate that our model outperforms several state-of-the-art baselines in terms of mean intersection over union (mIoU).
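To make the gating idea concrete, below is a minimal PyTorch sketch of a discriminator whose per-pixel real/fake scores are modulated by a soft foreground mask, so foreground and background regions receive separate adversarial signals. The class name `GatedDiscriminator`, the layer sizes, and the two-head design are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedDiscriminator(nn.Module):
    """Illustrative sketch: a fully convolutional discriminator with
    separate foreground/background heads gated by a predicted mask.
    Layer widths and kernel sizes are assumptions for demonstration."""

    def __init__(self, in_channels: int):
        super().__init__()
        # Shared trunk producing per-pixel discriminator features.
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Separate heads score foreground and background regions.
        self.fg_head = nn.Conv2d(128, 1, kernel_size=3, padding=1)
        self.bg_head = nn.Conv2d(128, 1, kernel_size=3, padding=1)

    def forward(self, features: torch.Tensor, fg_mask: torch.Tensor):
        h = self.trunk(features)
        # Resize the soft foreground mask (values in [0, 1]) to the
        # trunk's spatial resolution before gating.
        mask = F.interpolate(
            fg_mask, size=h.shape[2:], mode="bilinear", align_corners=False
        )
        # Gate each head: foreground logits are weighted by the mask,
        # background logits by its complement.
        fg_logits = self.fg_head(h) * mask
        bg_logits = self.bg_head(h) * (1.0 - mask)
        return fg_logits, bg_logits


# Hypothetical usage: features from a segmentation backbone plus a
# foreground mask predicted by the foreground adaptation module.
disc = GatedDiscriminator(in_channels=256)
feats = torch.randn(2, 256, 64, 128)   # dummy backbone features
mask = torch.rand(2, 1, 64, 128)       # dummy soft foreground mask
fg_out, bg_out = disc(feats, mask)
```

The two gated outputs would each feed a standard adversarial loss, letting the foreground and background alignments be weighted independently; how the losses are balanced is a design choice the abstract does not specify.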