Abstract

In this work, we propose LFNet, a novel light field fusion network: a CNN-based light field saliency model that exploits 4D light field data containing abundant spatial and contextual information. The proposed method can reliably locate and identify salient objects even in complex scenes. LFNet consists of a light field refinement module (LFRM) and a light field integration module (LFIM), which together refine and integrate focusness, depth, and objectness cues from light field images. The LFRM learns the residual between light field and RGB images to refine features with useful light field cues; the LFIM then weights each refined light field feature and learns the spatial correlations among them to predict saliency maps. Our method takes full advantage of light field information and performs especially well in challenging scenes, e.g., similar foreground and background, multiple or transparent objects, and low-contrast environments. Experiments show that our method outperforms state-of-the-art 2D, 3D, and 4D methods across three light field datasets.
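To make the two-module design concrete, the following is a minimal PyTorch-style sketch of the refinement-then-integration idea described above. It is not the authors' implementation: the module names LFRM and LFIM come from the abstract, but the channel sizes, the residual block, the gating mechanism, and the assumption that the light field cues arrive as a list of per-cue feature maps are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class LFRM(nn.Module):
    """Illustrative light field refinement module (assumption):
    learns a residual between a light field feature and the RGB feature,
    and adds it back to refine the feature with light field cues."""

    def __init__(self, channels: int):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, rgb_feat: torch.Tensor, lf_feat: torch.Tensor) -> torch.Tensor:
        # The residual encodes what the light field cue adds beyond RGB.
        res = self.residual(torch.cat([rgb_feat, lf_feat], dim=1))
        return rgb_feat + res


class LFIM(nn.Module):
    """Illustrative light field integration module (assumption):
    weights each refined light field feature with a learned gate,
    then fuses them spatially to predict a saliency map."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, 1, 1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, refined_feats: list[torch.Tensor]) -> torch.Tensor:
        # refined_feats: list of [B, C, H, W] tensors, one per light field cue
        # (e.g., focusness, depth, objectness).
        weighted = [f * self.gate(f) for f in refined_feats]
        fused = torch.stack(weighted, dim=0).sum(dim=0)
        return torch.sigmoid(self.fuse(fused))  # saliency map in [0, 1]
```

Under these assumptions, a forward pass would refine each light field cue against the shared RGB feature with an LFRM instance, collect the refined features into a list, and pass that list to the LFIM to obtain the predicted saliency map.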
