Abstract

Most existing RGB-D salient object detection models focus on the quality of the depth images, while in some cases the quality of the RGB images may have an even greater impact on saliency detection, which has long been ignored and underestimated. To address this problem, in this paper we present a Bi-directional Progressive Guidance Network (BPGNet) for RGB-D salient object detection that accounts for the quality of both the RGB and the depth images. Since it is usually difficult to determine in advance which modality is of low quality, a bi-directional framework based on a progressive guidance (PG) strategy is employed to extract and enhance the unimodal features with the aid of the other modality, via alternating interactions between the saliency prediction results and the features extracted from the multi-modality input data. Specifically, the proposed PG strategy is realized by the proposed Global Context Awareness (GCA), Auxiliary Feature Extraction (AFE), and Cross-modality Feature Enhancement (CFE) modules. Benefiting from the PG strategy, disturbing information within the input RGB and depth images is well suppressed, while discriminative information within the input images is enhanced. On top of that, a Fusion Prediction Module (FPM) is further designed to adaptively select the features with higher discriminability and to enhance the common information for the final saliency prediction. Experimental results demonstrate that our model performs comparably to state-of-the-art RGB-D SOD models.
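The abstract does not specify the internals of the proposed modules, but the core idea of the PG strategy, using one modality's saliency prediction to gate and enhance the other modality's features, can be sketched in a minimal way. The sketch below is purely illustrative and is not the paper's implementation; the function name `cross_modality_enhance`, the residual-style gating formula, and all tensor shapes are assumptions for demonstration only.

```python
import numpy as np

def sigmoid(x):
    # Numerically standard logistic function, mapping logits to (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def cross_modality_enhance(feat_rgb, saliency_depth):
    """Illustrative sketch (not the paper's CFE module): gate RGB features
    with a saliency map predicted from the depth branch.

    feat_rgb       : (C, H, W) feature tensor from the RGB stream.
    saliency_depth : (H, W) raw saliency logits from the depth stream.

    Returns enhanced RGB features of the same shape: responses in regions
    the depth branch deems salient are amplified, while background
    (potentially disturbing) responses are left closer to their original
    magnitude -- a residual-style gating so no information is zeroed out.
    """
    gate = sigmoid(saliency_depth)               # (H, W), values in (0, 1)
    return feat_rgb * (1.0 + gate[None, :, :])   # broadcast over channels

# Tiny usage example with random tensors standing in for network features.
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))   # hypothetical 8-channel feature map
sal = rng.standard_normal((16, 16))       # hypothetical depth-branch logits
out = cross_modality_enhance(feat, sal)
```

In a bi-directional setup, the symmetric call (gating depth features with an RGB-branch saliency map) would be applied in the same fashion, and the interaction repeated progressively across decoder stages.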
