Abstract

Road extraction from very-high-resolution remote sensing images is important for autonomous driving and road planning. Compared with large-scale objects, roads are narrow, winding, and often occluded by building shadows, making them difficult for deep convolutional neural networks (DCNNs) to identify. This paper proposes a semantics-geometry framework (SGNet) with a two-branch backbone: a semantics-dominant branch and a geometry-dominant branch. The semantics-dominant branch takes images as input to predict dense semantic features, while the geometry-dominant branch takes images to generate sparse boundary features. The dense semantic features and boundary details produced by the two branches are then adaptively fused. Further, by exploiting the affinity between neighboring pixels, a feature refinement module is proposed to refine textures and road details. We evaluate SGNet on the Ottawa road dataset. Experiments show that SGNet outperforms other competitors on the road extraction task. Code is available at https://github.com/qiuluyi/SGNet.
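The abstract describes adaptively fusing dense semantic features with sparse boundary features. A common way to realize such fusion is a per-pixel sigmoid gate that weights the two feature maps; the sketch below illustrates this idea only. The function name, tensor shapes, and the gating scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def adaptive_fusion(semantic_feats, boundary_feats, gate_logits):
    """Illustrative fusion of dense semantic features with sparse
    boundary features via a per-pixel sigmoid gate (an assumed
    scheme, not SGNet's exact module).

    semantic_feats, boundary_feats: arrays of shape (C, H, W)
    gate_logits: raw per-pixel gate values of shape (H, W)
    """
    gate = 1.0 / (1.0 + np.exp(-gate_logits))  # sigmoid -> (0, 1)
    # Weighted sum: gate favors semantics, (1 - gate) favors boundaries.
    return gate * semantic_feats + (1.0 - gate) * boundary_feats

# Toy example: zero logits give a gate of 0.5 everywhere,
# so the fused map is the average of the two inputs.
sem = np.ones((4, 8, 8))      # dense semantic features
bnd = np.zeros((4, 8, 8))     # sparse boundary features
fused = adaptive_fusion(sem, bnd, np.zeros((8, 8)))
```

In a trained network the gate logits would themselves be predicted from the two feature maps, letting the model decide per pixel whether semantics or boundary geometry should dominate.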
