Abstract
Weakly Supervised Semantic Segmentation (WSSS) provides efficient solutions for semantic image segmentation using only image-level annotations. Unlike Fully Supervised Semantic Segmentation (FSSS), WSSS does not require pixel-level labeling, which is time-consuming and labor-intensive. Most WSSS approaches leverage Class Activation Maps (CAM) or Self-Attention (SA) to generate pseudo pixel-level annotations, which are then used to train fully supervised models (e.g., a Fully Convolutional Network). However, these approaches often provide incomplete supervision that mainly covers the discriminative regions captured by the last convolutional layer, and they may miss regions associated with low- or intermediate-level features that are not present in that layer. To address this issue, we propose a novel Multi-layered Self-Attention (Multi-SA) method that applies a self-attention module to multiple convolutional layers and then stacks the feature maps from these self-attention layers to generate pseudo pixel-level annotations. Through extensive experiments on standard benchmark datasets, we demonstrate that the integrated feature maps from multiple self-attention layers yield higher segmentation coverage than using the last convolutional layer alone.
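For illustration, the sketch below shows one way the described idea could look in PyTorch: a non-local style self-attention block is applied to several backbone stages, and the attended feature maps are upsampled, stacked, and turned into class activation maps from which pseudo pixel-level labels could be derived. This is a minimal sketch under stated assumptions, not the paper's implementation; all names here (`SelfAttention2d`, `MultiSA`, `backbone_stages`, `stage_channels`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """Non-local style self-attention over the spatial positions of a feature map.
    (Hypothetical module; the paper's exact attention design may differ.)"""
    def __init__(self, in_channels):
        super().__init__()
        self.query = nn.Conv2d(in_channels, in_channels // 8, 1)
        self.key = nn.Conv2d(in_channels, in_channels // 8, 1)
        self.value = nn.Conv2d(in_channels, in_channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned weight for the residual branch

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(x).flatten(2)                      # (B, C//8, HW)
        attn = torch.softmax(q @ k, dim=-1)             # (B, HW, HW) attention over positions
        v = self.value(x).flatten(2)                    # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

class MultiSA(nn.Module):
    """Sketch of multi-layered self-attention: attend over several convolutional
    stages and stack the (upsampled) attended feature maps before a 1x1 classifier."""
    def __init__(self, backbone_stages, stage_channels, num_classes):
        super().__init__()
        self.stages = nn.ModuleList(backbone_stages)  # assumed list of backbone stages
        self.attn = nn.ModuleList(SelfAttention2d(c) for c in stage_channels)
        self.classifier = nn.Conv2d(sum(stage_channels), num_classes, 1)

    def forward(self, x):
        attended = []
        for stage, attn in zip(self.stages, self.attn):
            x = stage(x)
            attended.append(attn(x))
        # Upsample every attended map to the resolution of the earliest (largest) one
        target = attended[0].shape[-2:]
        stacked = torch.cat(
            [F.interpolate(f, size=target, mode="bilinear", align_corners=False)
             for f in attended],
            dim=1)
        cams = self.classifier(stacked)                       # per-class activation maps
        logits = F.adaptive_avg_pool2d(cams, 1).flatten(1)    # image-level class scores
        return logits, torch.relu(cams)                       # maps usable as pseudo labels
```

As a usage assumption, `backbone_stages` could be the later stages of a ResNet (e.g., `layer2`, `layer3`, `layer4` with `stage_channels=[512, 1024, 2048]`), trained with an image-level classification loss on `logits`, while the returned activation maps are thresholded into pseudo pixel-level annotations.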