Abstract
Saliency detection has attracted considerable attention in recent years, and many efforts have been made to address it from different perspectives. However, current saliency models cannot meet the demands of diversified scenes due to their limited generalization capability. To tackle this problem, in this paper we propose a hybrid saliency model that fuses heterogeneous visual cues for robust salient object detection. A new contour cue is first introduced to provide discriminative saliency information for scene description. It is formulated as a discrete optimization objective and can be solved efficiently with an iterative algorithm. The contour cue is then incorporated into a hybrid sparse learning model, in which cues from different domains interact with and complement each other for joint saliency fusion. This saliency fusion model is parameter-free, and its numerical solution can be obtained using gradient descent methods. Finally, we introduce an object-proposal-based collaborative filtering strategy to generate high-quality saliency maps from the fusion results. Compared with traditional methods, the proposed saliency model fuses heterogeneous cues within a unified optimization framework rather than combining them separately. Therefore, it offers favorable modeling capability in diversified scenes where saliency patterns appear quite differently. To verify the effectiveness of the proposed method, we conduct experiments on four large saliency benchmark datasets and compare it with 26 other state-of-the-art saliency models. Both qualitative and quantitative evaluation results indicate the superiority of our method, especially in challenging situations. In addition, we apply our saliency model to ship detection on radar platforms and obtain promising results compared with traditional detectors.
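The abstract notes that the hybrid fusion model is solved numerically with gradient descent. The sketch below is only an illustrative, hedged example of that general idea, not the paper's actual objective: it fuses several cue maps by minimizing a simple least-squares data term plus a sparsity penalty. The function name `fuse_cues`, its parameters (`lam`, `lr`, `n_iter`), and the objective itself are assumptions made for illustration.

```python
import numpy as np

def fuse_cues(cues, lam=0.1, lr=0.5, n_iter=200):
    """Fuse heterogeneous saliency cue maps into one map via gradient descent.

    Illustrative sketch only (not the paper's formulation): minimizes
        sum_k ||s - c_k||^2 + lam * ||s||_1
    with subgradient descent, where the L1 term encourages sparse,
    compact saliency responses.

    cues : list of 2-D arrays in [0, 1], all with the same shape.
    """
    cues = [np.asarray(c, dtype=np.float64) for c in cues]
    s = np.mean(cues, axis=0)                      # initialize with the average cue
    for _ in range(n_iter):
        grad = sum(2.0 * (s - c) for c in cues)    # gradient of the data term
        grad += lam * np.sign(s)                   # subgradient of the L1 penalty
        s -= lr / len(cues) * grad
        s = np.clip(s, 0.0, 1.0)                   # keep a valid saliency range
    return s

if __name__ == "__main__":
    # Stand-ins for real cue maps (e.g., a contour cue and a color cue).
    rng = np.random.default_rng(0)
    contour_cue = rng.random((64, 64))
    color_cue = rng.random((64, 64))
    fused = fuse_cues([contour_cue, color_cue])
    print(fused.shape, float(fused.min()), float(fused.max()))
```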