Abstract

As a fundamental component of many computer vision systems, saliency detection has made substantial progress in recent years through deep neural networks (DNNs). Most DNN-based methods rely on either sparse or dense labeling and are therefore subject to the inherent limitations of the chosen labeling scheme. DNN dense labeling captures salient objects mainly from global features, which are often hampered by other visually distinctive regions. DNN sparse labeling, on the other hand, is usually impeded by the inaccurate presegmentation of the images on which it depends. To address these limitations, we propose a new framework consisting of two pathways and an Aggregator that progressively integrates the DNN sparse and DNN dense labeling schemes to derive the final saliency map. In our "zipper"-type aggregation, we propose a multiscale-kernel approach to extract optimal criteria for saliency detection: nonsalient regions in the sparse labeling are suppressed while the dense labeling is guided to recover a more complete extent of the salient objects. Our method outperforms 11 state-of-the-art methods on six well-recognized benchmark datasets.
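The fusion idea described above can be illustrated with a minimal sketch. The function names, the box-filter kernels, and the multiplicative gating below are illustrative assumptions, not the paper's actual Aggregator; they only show how multiscale smoothing can let a sparse map suppress spurious responses while a dense map spreads coverage over the full object.

```python
import numpy as np

def box_filter(m, k):
    """Mean-filter a 2D map with a k x k box kernel (edge padding)."""
    pad = k // 2
    p = np.pad(m, pad, mode="edge")
    out = np.empty_like(m, dtype=float)
    h, w = m.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def aggregate(sparse_map, dense_map, scales=(3, 5, 7)):
    """Hypothetical 'zipper'-style fusion: smooth both saliency maps at
    several scales, then gate them element-wise so that only regions
    supported by both pathways survive."""
    fused = np.zeros_like(dense_map, dtype=float)
    for k in scales:
        s = box_filter(sparse_map, k)  # suppress isolated sparse responses
        d = box_filter(dense_map, k)   # spread dense responses over the object
        fused += s * d                 # keep regions both pathways agree on
    fused /= len(scales)
    return fused / (fused.max() + 1e-8)  # normalize to [0, 1]
```

A real implementation would learn the kernels per scale rather than fixing box filters, but the sketch conveys why combining the two labeling schemes can outperform either one alone.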

