Abstract

Salient object detection is a fundamental problem in pattern recognition and image processing. Previous salient object detection algorithms usually rely on hand-crafted features derived from priors or assumptions about the properties of salient objects. Inspired by the effectiveness of recently developed deep feature learning, we propose a novel Salient Object Detection model based on Local and Global Deep Residual Networks (SOD-LGDRN) for saliency computation. In particular, we train a deep residual network (ResNet-G) to measure the prominence of the salient object globally, and extract multi-level local features via another deep residual network (ResNet-L) to capture the local properties of the salient object. The final saliency map is obtained by combining the local-level and global-level saliency via Bayesian fusion. Quantitative and qualitative experiments on six benchmark datasets demonstrate that our SOD-LGDRN method outperforms eight state-of-the-art methods in salient object detection.
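The abstract does not spell out the fusion formula, but the Bayesian combination of two saliency maps is commonly done pixel-wise: one map acts as the prior P(F), while foreground/background likelihoods of the other map's values are estimated from a thresholded split of the prior. The sketch below illustrates such a scheme under these assumptions; the function names, bin count, and 0.5 threshold are illustrative choices, not details taken from the paper.

```python
import numpy as np

def bayes_fuse(prior, other, bins=16, thresh=0.5, eps=1e-8):
    """Posterior P(F | other) with `prior` as pixel-wise prior P(F).

    Likelihoods of `other`'s values are estimated from histograms over
    the thresholded foreground/background regions of `prior`.
    Both maps are HxW arrays with values in [0, 1].
    """
    fg = prior >= thresh                         # rough foreground split
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(other, edges) - 1, 0, bins - 1)

    # Smoothed likelihood histograms of `other` inside / outside the split.
    fg_hist, _ = np.histogram(other[fg], bins=edges)
    bg_hist, _ = np.histogram(other[~fg], bins=edges)
    p_fg = (fg_hist + eps) / (fg_hist.sum() + bins * eps)
    p_bg = (bg_hist + eps) / (bg_hist.sum() + bins * eps)

    like_fg, like_bg = p_fg[idx], p_bg[idx]
    return prior * like_fg / (prior * like_fg + (1.0 - prior) * like_bg + eps)

def fuse_local_global(s_local, s_global):
    """Symmetric Bayesian fusion: each saliency map serves once as the prior."""
    fused = 0.5 * (bayes_fuse(s_local, s_global) + bayes_fuse(s_global, s_local))
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)
```

Here `s_local` and `s_global` would stand for the normalized outputs of ResNet-L and ResNet-G, respectively; averaging the two posteriors keeps the fusion symmetric so neither branch dominates.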
