Abstract

Predicting salient objects is of fundamental importance in image processing and computer vision. While numerous approaches have been proposed for automatic salient object detection in images and videos, much less work has been dedicated to detecting and segmenting salient objects from light fields. In this article, based on the intrinsic characteristics of light fields, we carefully explore the complementary coherence among multiple cues, including spatial, edge, and depth information, and elaborately design a multi-task collaborative network for light field salient object detection. More specifically, the correlation mechanisms among edge detection, depth inference, and salient object detection are carefully investigated to facilitate the learning of representative saliency features. We first model the coherence among low-level features, heuristic semantic priors, and edge information. Subsequently, depth-oriented saliency features are derived from the geometry of light fields, in which 3D convolution, with its powerful representation capability, is leveraged to model the disparity correlations among multiple viewpoint images. Finally, a feature-enhanced salient object generator is developed to integrate these complementary saliency features, leading to the final salient object predictions for light fields. Quantitative and qualitative experiments demonstrate the superiority of our proposed model over state-of-the-art methods on public light field salient object detection datasets.
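The depth-oriented branch described above applies 3D convolution across the angular dimension of the light field, so that each filter response mixes disparity-shifted content from neighbouring viewpoints. As a rough illustration of the underlying operation (a toy NumPy sketch, not the authors' network), one can treat the viewpoint images as a 3D volume and slide a 3D kernel over views and spatial positions:

```python
import numpy as np

def conv3d_valid(stack, kernel):
    """Naive valid-mode 3D convolution over a viewpoint stack.

    stack:  (V, H, W) array -- V viewpoint images of size H x W.
    kernel: (kv, kh, kw) filter spanning views and space, so each
            output value aggregates disparity-shifted content from
            neighbouring viewpoints (illustrative toy example only).
    """
    V, H, W = stack.shape
    kv, kh, kw = kernel.shape
    out = np.zeros((V - kv + 1, H - kh + 1, W - kw + 1))
    for v in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[v, i, j] = np.sum(stack[v:v + kv, i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical light field: 5 viewpoints of an 8x8 image.
stack = np.arange(5 * 8 * 8, dtype=float).reshape(5, 8, 8)
kernel = np.ones((3, 3, 3)) / 27.0  # averaging filter across views and space
features = conv3d_valid(stack, kernel)
print(features.shape)  # (3, 6, 6)
```

In practice a deep-learning framework's batched 3D convolution (e.g. a `Conv3d`-style layer with learned kernels) would replace this loop, but the angular-spatial mixing it performs is the same idea.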
