Abstract

The use of RGB-D information for salient object detection (SOD) is being increasingly explored. Traditional multilevel models treat low- and high-level features alike, using the same number of features for blending. In contrast, in this paper we propose a multilevel reverse-context interactive-fusion (MRI) network (MRINet) for RGB-D SOD. Specifically, we first extract and reuse different numbers of features depending on their level: the deeper the features, the more extraction passes we apply, because deeper features carry more semantic cues, which are important for locating salient regions. We then use an RGB MRI block (MRIB) to merge RGB information across levels; in addition, we treat depth features as auxiliary information and use an RGB-D MRIB to merge them fully with the RGB information. The RGB and RGB-D MRIBs reconstruct the high-level feature map at high resolution and integrate the low-level feature map to enhance boundary details. Extensive experiments demonstrate the effectiveness of the proposed MRINet and its state-of-the-art performance in RGB-D SOD.
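The level-dependent feature reuse described above can be sketched in simplified form: deeper levels receive more refinement passes before RGB and depth features are fused. This is a minimal illustrative sketch only; the function names, the placeholder transforms, and the fusion rule are assumptions for exposition, not the paper's actual architecture.

```python
# Hypothetical sketch of level-dependent feature reuse: deeper levels
# are refined more times before fusion. All names and operations are
# illustrative stand-ins, not the actual MRINet implementation.

def refine(feature, times):
    # Stand-in "extraction" step: repeatedly transform the feature.
    for _ in range(times):
        feature = [x * 0.5 + 1.0 for x in feature]  # placeholder transform
    return feature

def multilevel_extract(levels):
    # levels[0] is shallowest, levels[-1] is deepest; deeper levels
    # receive more refinement passes (here, level index + 1 passes).
    return [refine(f, i + 1) for i, f in enumerate(levels)]

def fuse(rgb_levels, depth_levels):
    # Simplified stand-in for MRIB fusion: combine RGB features with
    # depth features used as auxiliary information.
    return [
        [r + 0.5 * d for r, d in zip(rf, df)]
        for rf, df in zip(rgb_levels, depth_levels)
    ]

rgb = multilevel_extract([[1.0, 2.0], [1.0, 2.0], [1.0, 2.0]])
depth = multilevel_extract([[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]])
fused = fuse(rgb, depth)  # one fused feature list per level
```

The key design point this sketch mirrors is the asymmetry across levels: unlike traditional models that blend the same number of features at every level, deeper (more semantic) features are extracted and reused more often.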
