Abstract
Feature refinement and feature fusion are two key steps in convolutional neural network–based salient object detection (SOD). In this article, we investigate how to exploit multiple guidance mechanisms to better refine and fuse extracted multi-level features, and we propose a novel multi-guidance SOD model dubbed MGuid-Net. Since boundary information is beneficial for locating and sharpening salient objects, our network exploits edge features alongside saliency features. Specifically, a self-guidance module is applied separately to the multi-level saliency features and the multi-level edge features, gradually refining lower-level features under the guidance of higher-level ones. A cross-guidance module is then devised to mutually refine the saliency and edge features by exploiting the complementarity between them. Moreover, to better integrate the refined multi-level features, we present an accumulative guidance module, which exploits multiple high-level features to guide the fusion of different features in a hierarchical manner. Finally, a pixelwise contrast loss function is adopted as implicit guidance to help our network retain more detail in salient objects. Extensive experiments on five benchmark datasets demonstrate that our model identifies salient regions more effectively than most state-of-the-art models.
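To make the high-to-low guidance idea concrete, the sketch below shows one plausible PyTorch realization of a single self-guidance step, in which an upsampled higher-level feature gates, and is then fused with, a lower-level feature. This is a minimal illustration under our own assumptions: the class name SelfGuidanceBlock, the sigmoid gating scheme, and all channel sizes are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch of one self-guidance step: a higher-level (coarser,
# more semantic) feature map guides the refinement of a lower-level one.
# Names, channel sizes, and the gating scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfGuidanceBlock(nn.Module):
    """Refine a lower-level feature map using a higher-level one."""
    def __init__(self, low_ch: int, high_ch: int):
        super().__init__()
        # Project the higher-level feature into a single-channel spatial gate.
        self.gate = nn.Conv2d(high_ch, 1, kernel_size=1)
        # Fuse the gated low-level feature with the upsampled high-level one.
        self.fuse = nn.Conv2d(low_ch + high_ch, low_ch, kernel_size=3, padding=1)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # Upsample the high-level feature to the low-level spatial size.
        high_up = F.interpolate(high, size=low.shape[2:], mode="bilinear",
                                align_corners=False)
        # Sigmoid gate: higher-level semantics decide which low-level
        # activations to emphasize.
        attn = torch.sigmoid(self.gate(high_up))
        low_refined = low * attn + low  # residual gating keeps original detail
        return F.relu(self.fuse(torch.cat([low_refined, high_up], dim=1)))

if __name__ == "__main__":
    low = torch.randn(1, 64, 88, 88)    # lower-level (finer) feature
    high = torch.randn(1, 128, 44, 44)  # higher-level (coarser) feature
    block = SelfGuidanceBlock(low_ch=64, high_ch=128)
    print(block(low, high).shape)  # torch.Size([1, 64, 88, 88])
```

The residual gating (low * attn + low) is a common design choice that lets semantic context suppress background responses in fine-grained features without discarding them outright; the paper's actual module may differ in its exact formulation.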