Abstract
Recent works on salient object detection (SOD) mainly focus on pixel-level classification by leveraging fully convolutional network (FCN)-based encoder-decoder models. In this paper, considering that context relations play a critical role in defining a salient object in a scene, we propose a local-to-global context-aware feature augmentation network, namely LGCNet. A two-branch attention-based context relation modeling structure is designed to capture context-aware information from foreground/background cues and from global feature representations. A pixel-wise self-attention mechanism is then incorporated into both branches to propagate global context information to local feature representations. As a result, a coarse-to-fine salient object detection model is formulated. The whole framework can be trained end-to-end under a deeply supervised scheme. Experimental results demonstrate the effectiveness of the key components of LGCNet, which achieves promising results in comparison with 18 state-of-the-art methods on six widely-used benchmark datasets.
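The abstract does not specify the exact formulation of the pixel-wise self-attention used in LGCNet, but the general idea of propagating global context to every local feature position can be illustrated with a minimal NumPy sketch of generic non-local self-attention; the function name, the shared query/key/value projection, and the residual connection are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pixelwise_self_attention(feat):
    """Hypothetical sketch: feat is a (C, H, W) feature map.
    Each pixel aggregates a weighted sum over all pixels, so global
    context flows into every local feature representation."""
    C, H, W = feat.shape
    x = feat.reshape(C, H * W).T          # (N, C): one row per pixel, N = H*W
    q, k, v = x, x, x                     # shared projection kept trivial here
    attn = softmax(q @ k.T / np.sqrt(C))  # (N, N) pixel-to-pixel affinities
    out = attn @ v                        # global context per pixel
    return out.T.reshape(C, H, W) + feat  # residual keeps local detail
```

In practice the query, key, and value would be separate learned projections of the feature map, and the (N, N) affinity matrix is what makes the operation "global": every output pixel sees every input pixel.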