Abstract

In recent years, the computer vision community has witnessed a surge of interest in interactive object segmentation, which seeks to expedite the annotation process for pixel-wise segmentation tasks through user guidance. Despite this growing interest, existing methods mainly support a single type of pre-annotation and neglect the quality of boundary prediction, which significantly influences the subsequent manual adjustment of segmentation boundaries. To address these limitations, we introduce a novel end-to-end network that enables more precise building segmentation from diverse types of user guidance. In our proposed method, a centroid map is generated to provide foreground prior information crucial to the subsequent segmentation procedure, and a boundary correction module automatically refines the segmentation masks produced by existing segmentation networks. Extensive experiments on two popular building extraction datasets demonstrate that our method outperforms all existing approaches under various forms of user guidance (bounding boxes, inside-outside points, or extreme points), achieving IoU scores of over 95% on the SpaceNet-Vegas dataset and over 93% on the Inria-building dataset. This remarkable performance further demonstrates our method's potential to alleviate the labor-intensive annotation process associated with remote sensing datasets. The code of our proposed method is available at https://github.com/StephenDHYang/UGBS-pytorch.
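As a rough illustration of how user guidance of this kind can be fed to a segmentation network, the sketch below renders guidance points (e.g., inside-outside clicks, extreme points, or a box centroid) as a Gaussian heatmap and concatenates it with the image as an extra input channel. This is a common encoding in interactive segmentation, not the paper's actual implementation; the function name, sigma value, and channel layout are assumptions for illustration, and the linked repository is authoritative.

# Hypothetical sketch (not the authors' released code): encoding user
# guidance points as an extra input channel for a segmentation backbone.
import torch

def gaussian_guidance_map(points, height, width, sigma=10.0):
    """Render user-provided (row, col) points as a (1, H, W) Gaussian
    heatmap that can be concatenated with the RGB image channels."""
    ys = torch.arange(height, dtype=torch.float32).view(-1, 1)  # (H, 1)
    xs = torch.arange(width, dtype=torch.float32).view(1, -1)   # (1, W)
    heatmap = torch.zeros(height, width)
    for r, c in points:
        # Broadcast to a (H, W) Gaussian bump centered at (r, c).
        g = torch.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2 * sigma ** 2))
        heatmap = torch.maximum(heatmap, g)  # keep the peak value per pixel
    return heatmap.unsqueeze(0)

# Usage: build a 4-channel input (RGB + guidance) for a backbone whose
# first convolution accepts 4 input channels.
image = torch.rand(3, 256, 256)
guidance = gaussian_guidance_map([(128, 128)], 256, 256)
model_input = torch.cat([image, guidance], dim=0)  # (4, 256, 256)

In practice, methods of this family often use separate channels for positive and negative clicks, or a distance transform instead of a Gaussian; the single-channel encoding above is simply the minimal variant.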
