Abstract

Abundant light field cues have been shown to boost the performance of salient object detection (SOD) in challenging scenarios. However, the majority of existing deep light field SOD models focus on exploring spatial interrelations across focal slices and seldom consider the boundary accuracy of salient objects, which inevitably limits the detection performance. Meanwhile, in addition to focal stacks, several other data forms/modalities can be derived simultaneously from the light field. Therefore, existing UNet-like strategies widely adopted for a single modality may not be well suited to this task. To address these issues, we propose a novel multi-modal edge-aware network for light field SOD, named MEANet. MEANet has two innovative components elaborately designed for this task, i.e., the cross-modal compensation (CMC) module and the multi-modal edge supervision (MES) module. CMC uses an attention mechanism to explore cross-modal complementarities and overcome the information loss of focal stack cues, whereas MES generates explicit object edges and edge features in order to progressively refine regional features and achieve edge-aware detection. Comprehensive evaluations on four benchmark datasets show that MEANet outperforms state-of-the-art light field and RGB-D/RGB SOD models, effectively generating saliency maps with fine-grained, accurate object boundaries from multiple data inputs. The code and models are publicly available at https://github.com/jiangyao-scu/MEANet.
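To make the cross-modal compensation idea concrete, the sketch below shows one plausible way an attention-based CMC-style fusion could combine focal-stack features with all-in-focus RGB features. This is a minimal illustration, not the authors' released implementation: the module name, the residual channel-attention design, and the 1x1 fusion convolution are assumptions for exposition only (the paper's actual CMC details are in the full text and repository).

```python
# Minimal sketch (assumed design, not MEANet's official code) of an
# attention-based cross-modal compensation step: channel attention derived
# from the RGB branch re-weights focal-stack features, then both modalities
# are merged by a 1x1 convolution.
import torch
import torch.nn as nn


class CrossModalCompensationSketch(nn.Module):
    """Hypothetical CMC-style fusion of RGB and focal-stack features."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel-attention weights computed from the RGB (all-in-focus) feature.
        self.channel_attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Merge the compensated focal-stack feature with the RGB feature.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb_feat: torch.Tensor, focal_feat: torch.Tensor) -> torch.Tensor:
        # rgb_feat, focal_feat: (B, C, H, W) features from the two branches.
        attn = self.channel_attn(rgb_feat)            # (B, C, 1, 1) modality-aware weights
        compensated = focal_feat * attn + focal_feat  # residual re-weighting of focal cues
        return self.fuse(torch.cat([rgb_feat, compensated], dim=1))


if __name__ == "__main__":
    cmc = CrossModalCompensationSketch(channels=64)
    rgb = torch.randn(2, 64, 32, 32)
    focal = torch.randn(2, 64, 32, 32)
    print(cmc(rgb, focal).shape)  # torch.Size([2, 64, 32, 32])
```

In an edge-aware pipeline such as the one the abstract describes, the fused output of a block like this would then be refined by edge features produced under explicit edge supervision (the MES module); consult the linked repository for the authors' actual architecture.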
