Abstract

RGB-depth (RGB-D) salient object detection (SOD) is a meaningful yet challenging task. Convolutional neural networks achieve good detection performance on simple scenes, but they cannot effectively handle scenes in which salient objects have complex contours or are similarly coloured to the background. A novel end-to-end framework is proposed for RGB-D SOD, which comprises four main components: the cross-modal attention feature enhancement (CMAFE) module, the multi-level contextual feature interaction (MLCFI) module, the boundary feature extraction (BFE) module, and the multi-level boundary attention guidance (MLBAG) module. The CMAFE module retains the most effective salient features by employing a dual-attention mechanism to filter noise from the two modalities. In the MLCFI module, a shuffle operation applied to high-level and low-level channels promotes cross-channel information communication and extracts rich semantic information. The BFE module converts salient features into boundary features to generate boundary maps. The MLBAG module produces saliency maps by aggregating multi-level boundary saliency maps to guide cross-modal features in the decoding stage. Extensive experiments on six public benchmark datasets demonstrate that the proposed model significantly outperforms 23 state-of-the-art RGB-D SOD models on multiple evaluation metrics.
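The channel-shuffle operation mentioned for the MLCFI module can be illustrated with a minimal sketch. The function below is a hypothetical NumPy implementation of the standard group-wise channel shuffle (as popularised by ShuffleNet); the exact grouping and placement within the MLCFI module are assumptions, not the paper's specification.

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels of a feature map (N, C, H, W) across `groups`
    groups so that information mixes between the groups.
    Hypothetical sketch of the shuffle used in the MLCFI module."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    # reshape to (N, groups, C//groups, H, W), swap the two group axes,
    # then flatten back to (N, C, H, W)
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(n, c, h, w)

# Example: concatenate low-level (marked 0) and high-level (marked 1)
# channels, then shuffle so the two levels interleave channel-wise.
low = np.zeros((1, 2, 4, 4))
high = np.ones((1, 2, 4, 4))
mixed = channel_shuffle(np.concatenate([low, high], axis=1), groups=2)
print(mixed[0, :, 0, 0])  # -> [0. 1. 0. 1.]
```

After the shuffle, each contiguous channel block contains features from both the low-level and high-level branches, which is what enables the cross-channel communication the abstract describes.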
