Abstract
Representing contextual features at multiple scales is important for RGB-D salient object detection (SOD). Recently, as backbone convolutional neural networks (CNNs) have gained stronger multi-scale representation ability, many methods have achieved promising performance. However, most of them represent multi-scale features in a layer-wise manner, which ignores the fine-grained global contextual cues within a single layer. In this paper, we propose a novel global contextual exploration network (GCENet) to explore the performance gain of multi-scale contextual features in a fine-grained manner. Concretely, a cross-modal contextual feature module (CCFM) is proposed to represent multi-scale contextual features at a single fine-grained level, which enlarges the range of receptive fields for each network layer. Furthermore, we design a multi-scale feature decoder (MFD) that integrates the fused features from the CCFM in a top-down manner. Extensive experiments on five benchmark datasets demonstrate that the proposed GCENet outperforms other state-of-the-art (SOTA) RGB-D SOD methods.
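To make the described pipeline concrete, the following is a minimal PyTorch sketch of the two components named in the abstract, under assumptions not stated there: the CCFM is approximated by cross-modal fusion followed by parallel dilated convolutions (one way to enlarge the receptive field within a single layer), and the MFD by top-down upsampling and refinement. The class names mirror the paper's modules, but the internal structure, channel counts, and dilation rates are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CCFM(nn.Module):
    """Hypothetical cross-modal contextual feature module: fuses RGB and depth
    features, then extracts multi-scale context within the single layer via
    parallel dilated convolutions (enlarging the receptive field)."""
    def __init__(self, channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        self.project = nn.Conv2d(len(dilations) * channels, channels, 1)

    def forward(self, rgb_feat, depth_feat):
        x = self.fuse(torch.cat([rgb_feat, depth_feat], dim=1))
        ctx = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.project(ctx) + x  # residual connection

class MFD(nn.Module):
    """Hypothetical multi-scale feature decoder: integrates fused features
    from deep to shallow layers (top-down), upsampling at each step."""
    def __init__(self, channels):
        super().__init__()
        self.refine = nn.Conv2d(channels, channels, 3, padding=1)
        self.predict = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, fused_feats):
        # fused_feats: list ordered shallow -> deep; decode top-down
        x = fused_feats[-1]
        for feat in reversed(fused_feats[:-1]):
            x = F.interpolate(x, size=feat.shape[-2:],
                              mode="bilinear", align_corners=False)
            x = self.refine(x + feat)
        return torch.sigmoid(self.predict(x))

# Usage with dummy three-level features (batch=1, 64 channels per level)
if __name__ == "__main__":
    ccfm, decoder = CCFM(64), MFD(64)
    sizes = [(64, 64), (32, 32), (16, 16)]
    rgb = [torch.randn(1, 64, h, w) for h, w in sizes]
    depth = [torch.randn(1, 64, h, w) for h, w in sizes]
    fused = [ccfm(r, d) for r, d in zip(rgb, depth)]
    saliency = decoder(fused)
    print(saliency.shape)  # torch.Size([1, 1, 64, 64])
```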