Abstract

Recent studies on semantic segmentation exploit contextual information to address inconsistent parsing predictions within large objects and the neglect of small objects. However, they apply multilevel contextual information uniformly across pixels, overlooking that different pixels may demand different levels of context. Motivated by this intuition, we propose a novel global-guided selective context network (GSCNet) that adaptively selects contextual information to improve scene parsing. Specifically, we introduce two global-guided modules, the global-guided global module (GGM) and the global-guided local module (GLM), to select global context (GC) and local context (LC), respectively, for each pixel. Given an input feature map, GGM jointly employs the feature map and its globally pooled feature to learn a global contextual demand, based on which per-pixel GC is selected. GLM, in turn, adopts the low-level feature from the adjacent stage as LC and jointly models the input feature map, its globally pooled feature, and the LC to generate a local contextual demand, based on which per-pixel LC is selected. Furthermore, we combine these two modules into a selective context block (SCB) and insert SCBs at different levels of the network to propagate contextual information in a coarse-to-fine manner. Finally, extensive experiments verify the effectiveness of the proposed model, which achieves state-of-the-art performance on four challenging scene parsing datasets: Cityscapes, ADE20K, PASCAL Context, and COCO Stuff. In particular, GSCNet-101 obtains 82.6% on the Cityscapes test set without using coarse data and 56.22% on the ADE20K test set.
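
To make the per-pixel selection mechanism concrete, the sketch below implements the idea in PyTorch. The module names (GGM, GLM, SCB) follow the abstract, but all internal details (the 1x1 convolutions, sigmoid gates, and additive fusion of the selected context) are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GGM(nn.Module):
    """Global-guided global module: per-pixel selection of global context."""

    def __init__(self, channels):
        super().__init__()
        # Learns a per-pixel "global contextual demand" from the input
        # feature map and its globally pooled counterpart (assumed 1x1 conv).
        self.demand = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        # Globally pooled feature, broadcast back over the spatial grid.
        g = F.adaptive_avg_pool2d(x, 1).expand_as(x)
        # Sigmoid gate in [0, 1]: how much global context each pixel takes.
        w = torch.sigmoid(self.demand(torch.cat([x, g], dim=1)))
        return x + w * g  # selected global context, added per pixel


class GLM(nn.Module):
    """Global-guided local module: per-pixel selection of local context."""

    def __init__(self, channels, low_channels):
        super().__init__()
        self.proj = nn.Conv2d(low_channels, channels, kernel_size=1)
        self.demand = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x, low):
        # The low-level feature from the adjacent stage serves as LC.
        lc = self.proj(low)
        if lc.shape[-2:] != x.shape[-2:]:
            lc = F.interpolate(lc, size=x.shape[-2:], mode="bilinear",
                               align_corners=False)
        g = F.adaptive_avg_pool2d(x, 1).expand_as(x)
        # Demand is modeled jointly from the input feature map, its
        # globally pooled feature, and the local context.
        w = torch.sigmoid(self.demand(torch.cat([x, g, lc], dim=1)))
        return x + w * lc  # selected local context, added per pixel


class SCB(nn.Module):
    """Selective context block: GGM followed by GLM."""

    def __init__(self, channels, low_channels):
        super().__init__()
        self.ggm = GGM(channels)
        self.glm = GLM(channels, low_channels)

    def forward(self, x, low):
        return self.glm(self.ggm(x), low)
```

As a quick check, `SCB(256, 128)(torch.randn(1, 256, 32, 32), torch.randn(1, 128, 64, 64))` returns a 1x256x32x32 tensor in which each pixel has received its own gated share of global and local context; stacking such blocks across stages mirrors the coarse-to-fine propagation described above.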
