Abstract

Deep convolutional neural networks (CNNs) have gained prominence in computer vision applications, including RGB salient object detection (SOD), owing to advances in deep learning. Nevertheless, most deep CNNs employ either VGGNet or ResNet as the backbone for extracting image information, which can lead to the following problems: 1) variations between imaging modalities during layer-wise feature extraction, as cross-modal features across layers are often fused in a single step, resulting in inadequate cross-modal feature extraction; 2) long-range dependence between features in multilayer feature decoding; and 3) blurred object boundaries. To address these issues, we leverage the complementary advantages of the VGGNet and ResNet architectures and present a novel hybrid VGG–ResNet feature encoder for RGB-T SOD. Specifically, we introduce a geometry information aggregation module that effectively combines and enhances the VGGNet spatial features of the RGB-T modalities from the bottom to the top. Moreover, we propose an innovative global saliency perception module that progressively refines the ResNet semantic features from the top to the bottom by integrating both local and global information. Furthermore, we introduce a Pearson-gated module to tackle the challenge of long-range dependence between features; it merges the fused features at multiple levels through gates computed from their Pearson correlation coefficients. Lastly, we devise an edge-aware module that precisely learns the contours of salient objects, thereby sharpening object boundaries. Extensive experiments on three RGB-T SOD benchmarks demonstrate that the proposed network surpasses state-of-the-art SOD methods.
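The abstract does not give the exact formulation of the Pearson-gated module, but the minimal PyTorch sketch below illustrates one way gating by a Pearson correlation coefficient between two feature maps could look. The class name, the sigmoid mapping of the correlation to a gate, and the convex blending of the two inputs are assumptions made for illustration, not the paper's implementation.

```python
# Hypothetical sketch of Pearson-correlation-based gated fusion (names and
# structure assumed). Two same-shape feature maps are blended with a gate
# derived from their per-sample Pearson correlation coefficient.
import torch
import torch.nn as nn


class PearsonGatedFusion(nn.Module):
    """Fuse two feature maps using a gate based on their Pearson correlation."""

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        b = feat_a.size(0)
        a = feat_a.reshape(b, -1)
        c = feat_b.reshape(b, -1)
        # Per-sample Pearson correlation coefficient between the two features.
        a_centered = a - a.mean(dim=1, keepdim=True)
        c_centered = c - c.mean(dim=1, keepdim=True)
        corr = (a_centered * c_centered).sum(dim=1) / (
            a_centered.norm(dim=1) * c_centered.norm(dim=1) + 1e-8
        )
        # Map the correlation (in [-1, 1]) to a gate in (0, 1).
        gate = torch.sigmoid(corr).view(b, 1, 1, 1)
        # Convex blend: highly correlated inputs weight feat_a more heavily.
        return gate * feat_a + (1.0 - gate) * feat_b


# Example usage with dummy features (e.g., one RGB and one thermal branch).
fusion = PearsonGatedFusion()
x = torch.randn(2, 256, 32, 32)
y = torch.randn(2, 256, 32, 32)
fused = fusion(x, y)  # same shape as the inputs
```

In the paper, such a gate would presumably be applied at each decoder level to the cross-modal fused features; the single-level form above is only meant to make the gating idea concrete.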
