Abstract

The dependence on large-scale pixel-level annotations poses a great challenge to the semantic segmentation of remote sensing images (RSIs). To alleviate this issue, we propose V2RNet, an unsupervised semantic segmentation method that introduces adversarial learning into the segmentation network. Our method transfers the segmentation model from synthetic GTA-V data to real optical remote sensing data via domain adaptation. Additionally, to unify the semantic structures of the source domain with the image style of the target domain, we design a semantic segmentation discriminator as an auxiliary module to improve domain adaptation efficiency. The proposed method is therefore effective on typical remote sensing targets such as densely arranged, intertwined roads. Experimental results on the Massachusetts Roads dataset demonstrate that our unsupervised semantic segmentation model achieves comparable segmentation accuracy, which validates the effectiveness of the proposed method.
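The abstract does not spell out the training objective, but adversarial domain adaptation of this kind typically pairs a discriminator loss (distinguish source from target predictions) with an adversarial loss that pushes target predictions to look source-like. The sketch below illustrates that standard two-part objective with NumPy; the discriminator outputs and all names are illustrative assumptions, not taken from V2RNet.

```python
import numpy as np

def bce(pred, label):
    """Binary cross-entropy on sigmoid probabilities (clipped for stability)."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(label * np.log(pred) + (1 - label) * np.log(1 - pred)))

# Hypothetical discriminator probabilities that a segmentation map
# came from the (synthetic) source domain.
d_on_source = np.array([0.9, 0.8])  # discriminator is fairly sure: source
d_on_target = np.array([0.2, 0.3])  # discriminator is fairly sure: target

# Discriminator objective: label source maps 1, target maps 0.
loss_d = bce(d_on_source, np.ones(2)) + bce(d_on_target, np.zeros(2))

# Adversarial objective for the segmentation network: make target
# predictions indistinguishable from source (i.e., fool the discriminator
# into outputting 1 on target maps).
loss_adv = bce(d_on_target, np.ones(2))
```

Minimizing `loss_adv` with respect to the segmentation network (while `loss_d` trains the discriminator) is what aligns the two domains; the segmentation loss on labeled source data is trained alongside it.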
