Abstract

The effective integration of RGB and depth-map features to improve the performance of RGB-D salient object detection (SOD) has garnered significant research interest. Existing dual-stream models fuse only high-level features or transfer depth features to RGB features unidirectionally; consequently, they cannot fully exploit the differences between the two modalities. Furthermore, owing to the influence of image background information, generated salient objects are often swallowed by the background. Herein, a three-stream RGB-D SOD method based on cross-layer and cross-modal dual-attention (CMDA) fusion is proposed. In the encoding stage, the CMDA fusion module fuses RGB and depth features layer by layer. Through this module, the merged interactive features can capture richer features of salient objects, realize the commonality and complementarity of the fused features, and achieve effective cross-modal fusion. In addition, for the decoding stage, a cross-level feature fusion module is proposed that introduces global context features into the up-sampling process, reduces the risk of salient objects being swallowed by the background, and helps to accurately detect salient areas. Three different branch features are used for simultaneous end-to-end training. The experimental results demonstrate that the proposed method outperforms other methods in terms of multiple evaluation metrics on four datasets. Furthermore, the precision–recall curves, F-measure curves, and saliency maps are visualized, showing that the detection performance of the proposed method is superior to that of other methods. During the testing stage, the model ran at 14 frames per second (FPS).
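To illustrate the general idea of cross-modal attention fusion described above, the following is a minimal NumPy sketch. It assumes a simplified scheme in which each modality's feature map is re-weighted by channel and spatial attention derived from the other modality before the two are merged; the function and variable names (`cmda_fuse`, `channel_attention`, `spatial_attention`) are hypothetical, and the authors' actual CMDA module is more elaborate than this sketch.

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # Global average pool over spatial dims -> per-channel gate in (0, 1)
    return _sigmoid(feat.mean(axis=(1, 2)))          # shape (C,)

def spatial_attention(feat):
    # Average over channels -> per-location gate in (0, 1)
    return _sigmoid(feat.mean(axis=0))               # shape (H, W)

def cmda_fuse(rgb, depth):
    """Simplified cross-modal dual-attention fusion of two (C, H, W)
    feature maps (an illustrative sketch, not the paper's exact module)."""
    # Each modality is re-weighted by the OTHER modality's channel gate ...
    rgb_enh = rgb * channel_attention(depth)[:, None, None]
    depth_enh = depth * channel_attention(rgb)[:, None, None]
    # ... and by the other modality's spatial gate (cross-modal interaction)
    rgb_enh = rgb_enh * spatial_attention(depth)[None, :, :]
    depth_enh = depth_enh * spatial_attention(rgb)[None, :, :]
    # Merge the interacted features into one fused representation
    return rgb_enh + depth_enh

# Example: fuse one layer's RGB and depth features
rgb = np.random.rand(8, 16, 16)
depth = np.random.rand(8, 16, 16)
fused = cmda_fuse(rgb, depth)
print(fused.shape)  # (8, 16, 16)
```

In the actual method, such a fusion step is applied layer by layer during encoding, so that each level of the backbone contributes cross-modal interactive features to the saliency prediction.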
