Abstract

In this paper, we propose a deep multimodal feature learning (DMFL) network for RGB-D salient object detection. Color and depth features are first extracted from low-level to high-level layers using CNNs. The features at the high layers are then shared and concatenated to construct a joint representation of the two modalities. The fused features are embedded into a high-dimensional metric space to separate salient from non-salient regions. A new objective function, consisting of a cross-entropy loss and a metric loss, is proposed to optimize the model. Discriminative features at both the pixel and attribute levels are learned for semantic grouping to detect the salient objects. Experimental results show that the proposed model achieves promising performance, with roughly a 1% to 2% improvement over conventional methods.
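As an illustration of the kind of objective the abstract describes (not the paper's exact formulation), the sketch below combines a pixel-wise cross-entropy loss with a simple contrastive-style metric loss over the fused RGB-D embeddings; the margin, the weighting factor `lambda_metric`, and the centroid-based pull/push formulation are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's implementation): joint objective of
# pixel-wise cross-entropy plus a contrastive-style metric loss over
# embedded salient / non-salient pixels. Assumes both classes appear in
# the batch; margin and lambda_metric are hypothetical hyperparameters.
import torch
import torch.nn.functional as F

def joint_loss(logits, embeddings, labels, margin=1.0, lambda_metric=0.1):
    """logits:     (N, 2, H, W) saliency class scores
       embeddings: (N, C, H, W) fused RGB-D features in the metric space
       labels:     (N, H, W)    binary saliency ground truth (0 or 1)"""
    # Pixel-level cross-entropy on the predicted saliency map.
    ce = F.cross_entropy(logits, labels)

    # Flatten embeddings and compute class centroids in the metric space.
    n, c, h, w = embeddings.shape
    emb = embeddings.permute(0, 2, 3, 1).reshape(-1, c)   # (N*H*W, C)
    lab = labels.reshape(-1)
    sal_center = emb[lab == 1].mean(dim=0)
    bg_center = emb[lab == 0].mean(dim=0)

    # Pull pixels toward their own centroid, push the two centroids apart.
    pull = ((emb[lab == 1] - sal_center) ** 2).sum(dim=1).mean() + \
           ((emb[lab == 0] - bg_center) ** 2).sum(dim=1).mean()
    push = F.relu(margin - torch.norm(sal_center - bg_center)) ** 2
    metric = pull + push

    return ce + lambda_metric * metric
```

In this kind of formulation, the cross-entropy term supervises the per-pixel saliency prediction, while the metric term encourages the embedded salient and non-salient features to form compact, well-separated groups.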
