Abstract

The ability to handle intra- and inter-modality features has been critical to the development of RGB-D salient object detection. While many works have advanced this field in leaps and bounds, most existing methods do not fully account for the inherent differences between RGB and depth data, because they rely on conventional convolution, whose kernel parameters are fixed at inference. To promote intra- and inter-modality interaction conditioned on the input scene, in which RGB and depth data are first processed independently and then fused interactively, we develop a new perspective and a stronger model. In this paper, we introduce a criss-cross dynamic filter network built by decoupling dynamic convolution. First, we propose a Modality-specific Dynamic Enhanced Module (MDEM) that dynamically enhances intra-modality features under global context guidance. Second, we propose a Scene-aware Dynamic Fusion Module (SDFM) that performs dynamic feature selection between the two modalities. As a result, our model produces accurate predictions of salient objects. Extensive experiments demonstrate that our method achieves competitive performance against 28 state-of-the-art RGB-D methods on 7 public datasets. The code and results of our method are available at https://github.com/OIPLab-DUT/C2DFNet.
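
To make the idea of scene-aware dynamic fusion concrete, the sketch below shows one plausible way to gate and fuse RGB and depth features with per-sample weights predicted from global context. This is only a minimal illustration, not the authors' SDFM (their implementation is in the linked repository); the module name, channel sizes, and gating scheme here are assumptions.

```python
# Illustrative sketch (not the authors' code): a scene-aware dynamic fusion
# block that predicts per-sample selection weights from a globally pooled
# descriptor and uses them to fuse RGB and depth features.
import torch
import torch.nn as nn


class SceneAwareDynamicFusionSketch(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # A small bottleneck MLP on the scene-level descriptor predicts one
        # gate per channel for each modality (hypothetical design choice).
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(2 * channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2 * channels, kernel_size=1),
        )
        self.fuse = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        # Scene descriptor conditioned on both modalities.
        context = self.gap(torch.cat([rgb_feat, depth_feat], dim=1))
        weights = torch.sigmoid(self.mlp(context))   # (B, 2C, 1, 1)
        w_rgb, w_depth = weights.chunk(2, dim=1)     # per-modality gates
        # Dynamic feature selection: the gates differ for every input scene.
        fused = w_rgb * rgb_feat + w_depth * depth_feat
        return self.fuse(fused)


if __name__ == "__main__":
    block = SceneAwareDynamicFusionSketch(channels=64)
    rgb = torch.randn(2, 64, 40, 40)
    depth = torch.randn(2, 64, 40, 40)
    print(block(rgb, depth).shape)  # torch.Size([2, 64, 40, 40])
```

Unlike a conventional convolution with fixed kernels, the fusion weights above are recomputed for every input, which is the property the abstract attributes to dynamic filtering.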
