Abstract

Objective: Recent methods for RGB-D salient object detection (SOD) achieve desirable performance by leveraging depth features extracted with convolutional neural networks (CNNs). However, further improvement is hindered by the lack of pretrained networks for extracting representative features from depth data, ambiguity in the fusion of depth and RGB features, and the contamination effect originating from unreliable depth data. Recently, boundary cues have been embedded in deep models to highlight the boundary regions of salient objects and improve SOD performance. Moreover, other methods use a backbone network to extract depth-map features, which increases model complexity.

Methods: Based on these observations, we propose a lightweight boundary enhancement network (LBENet) for RGB-D SOD. LBENet comprises two main modules: a normalized max-min filter (NMMF) and a boundary weight module (BWM). Instead of directly extracting depth features with CNNs, which may be suboptimal owing to unreliable depth data, the NMMF extracts boundary information from the raw depth data, which contain less noise and rich boundary information, and the BWM adaptively enhances the boundary-region features of the RGB stream. A boundary loss function enables the network to learn precise boundaries of salient objects.

Results: The proposed end-to-end method achieves state-of-the-art performance on two widely used benchmarks while remaining efficient: 44.271 G FLOPs, 15.96 M parameters, a 65.4 MB model size, and an inference speed of 29.41 frames/s.
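The abstract does not give the exact formulation of the NMMF, but the name suggests a normalized morphological gradient: a local maximum filter minus a local minimum filter over the depth map, rescaled to [0, 1], which responds strongly at depth discontinuities (object boundaries). The sketch below illustrates that idea under this assumption; the function name, window size, and normalization are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def normalized_max_min_filter(depth, size=3, eps=1e-8):
    """Illustrative boundary response from a raw depth map:
    local max minus local min (morphological gradient),
    normalized to [0, 1]. The window size and normalization
    are assumptions, not the paper's exact NMMF."""
    depth = depth.astype(np.float32)
    gradient = maximum_filter(depth, size=size) - minimum_filter(depth, size=size)
    # Min-max normalization; eps guards against a flat depth map.
    return (gradient - gradient.min()) / (gradient.max() - gradient.min() + eps)

# A depth map with one sharp step: the response peaks along the step
# (the object boundary) and is zero in the flat regions.
depth = np.zeros((8, 8), dtype=np.float32)
depth[:, 4:] = 1.0
boundary = normalized_max_min_filter(depth)
```

In a network such as LBENet, a map like `boundary` could then weight the RGB features near object contours, which is consistent with the role the abstract assigns to the BWM.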
