Abstract

Aerial image segmentation usually requires a large number of pixel-level masks to achieve quality performance. Obtaining these annotations can be both costly and time-consuming, limiting the amount of data available for training. In this paper, we present an approach for learning to segment aerial building footprints in the absence of fully annotated label masks. Instead, we exploit cheap and efficient scribble annotations to supervise deep convolutional neural networks for segmentation. Our proposed model is based on an adversarial architecture that jointly trains two networks to produce building footprint segmentations that resemble synthetic label masks. We present competitive segmentation results on the Massachusetts Buildings data set using only scribble supervision signals. Further experiments show that our method effectively alleviates the building-instance separation issue and is robust across different scribble instance levels. We believe our cost-effective approach has the potential to be adapted for other aerial image interpretation tasks.

