Abstract

Traditional saliency detection can effectively detect candidate objects through an attentional mechanism rather than exhaustive object detection, and is therefore widely used on natural scene images. However, it may fail to extract salient objects accurately from remote sensing images, which have their own characteristics such as large data volumes, multiple resolutions, illumination variation, and complex texture structure. We propose a sparsity-guided saliency detection model for remote sensing images that uses a sparse representation to obtain high-level global and background cues for saliency map integration. Specifically, it first uses pixel-level global cues and background prior information to construct two dictionaries that characterize the global and background properties of remote sensing images. It then employs a sparse representation to derive the high-level cues. Finally, a Bayesian formula is applied to integrate the saliency maps generated from the two types of high-level cues. Experimental results on remote sensing image datasets containing various objects under complex conditions demonstrate the effectiveness and feasibility of the proposed method.
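The abstract states that a Bayesian formula integrates the two saliency maps but does not reproduce the formula. A minimal sketch, assuming each map is normalized to [0, 1] and treated as an independent foreground-probability estimate (one common instantiation of Bayes' rule for map fusion, not necessarily the paper's exact formulation):

```python
import numpy as np

def bayesian_fusion(s_global, s_background):
    """Fuse two saliency maps treated as independent foreground probabilities.

    Illustrative only: the paper's exact Bayesian integration may differ.
    The posterior is high only where both maps agree on foreground.
    """
    num = s_global * s_background
    den = num + (1.0 - s_global) * (1.0 - s_background)
    return num / np.clip(den, 1e-12, None)  # guard against 0/0
```

Under this scheme, a pixel rated 0.5 (uninformative) by one map keeps the other map's score, while strong agreement sharpens the estimate.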

Highlights

  • Object detection in remote sensing images is of vital importance and has great potential in many fields such as navigation reconnaissance, autonomous navigation, scene understanding, geological survey, and precision-guided systems

  • We propose a sparsity-guided saliency model (SGSM) that combines global cues with background priors for saliency detection in remote sensing images

  • Methods based on background priors, such as dense and sparse reconstruction (DSR) and graph-based manifold ranking (GBMR), fail to accurately detect salient regions when those regions appear similar to the background

Summary

Introduction

Object detection in remote sensing images is of vital importance and has great potential in many fields such as navigation reconnaissance, autonomous navigation, scene understanding, geological survey, and precision-guided systems. Sun et al.[1] detected salient regions in remote sensing images by fusing the saliency maps of edge-based and graph-based visual saliency models. We propose a sparsity-guided saliency model (SGSM) that combines global cues with background priors for saliency detection in remote sensing images. The model takes a sparse representation approach, measuring the relationship between image patches and a dictionary to generate an objective saliency map. It exploits a sparse representation to produce high-level cues via global-based and background-based dictionaries. These two dictionaries are obtained from low-level cues based on global cues and the background prior, respectively, and they encode category information (i.e., object or background).
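The core idea of measuring the relationship between image patches and a dictionary can be illustrated with reconstruction error: patches that a background dictionary reconstructs poorly are likely salient. The sketch below is a toy illustration, not the paper's method; the `omp` solver, dictionary, and data are hypothetical, and SGSM's dictionary construction and multi-scale steps are omitted:

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate x with up to k atoms of D.

    Returns the final residual; a large residual means x is poorly
    represented by the dictionary D.
    """
    residual = x.astype(float).copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    return residual

def reconstruction_saliency(D_bg, patches, k=2):
    """Score patches by reconstruction error against a background dictionary,
    normalized to [0, 1]; high error suggests a salient (non-background) patch."""
    errs = np.array([np.linalg.norm(omp(D_bg, x, k)) for x in patches])
    return (errs - errs.min()) / (errs.max() - errs.min() + 1e-12)
```

A global dictionary would be used symmetrically, with its sparse codes characterizing how object-like each patch is before the two maps are fused.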

Sparsity-Guided Saliency Model
Low-level Feature Description via Global Cues and Background Prior
High-Level Feature Transformation Using a Sparse Representation
Sparse Representation-Based Saliency Computation
Saliency Map Integration
Object-biased Gaussian smoothing
Bayesian integration
Algorithm
Multiple Scales Integration
Databases
Experimental Setup
Combining global cues and background prior information
Selection of multiple scales
Precision and recall curves and F-measure
Comparison with 10 State-of-the-Art Methods
Single salient object detection
Multiple salient object detection
Findings
Conclusion