Abstract
Traditional saliency detection can effectively locate probable objects through an attentional mechanism rather than exhaustive object detection, and is therefore widely used on natural scenes. However, it may fail to extract salient objects accurately from remote sensing images, which have their own characteristics such as large data volumes, multiple resolutions, illumination variation, and complex texture structure. We propose a sparsity-guided saliency detection model for remote sensing images that uses sparse representation to obtain high-level global and background cues for saliency map integration. Specifically, it first uses pixel-level global cues and background prior information to construct two dictionaries that characterize the global and background properties of remote sensing images. It then employs sparse representation to derive the high-level cues. Finally, a Bayesian formula integrates the saliency maps generated from the two types of high-level cues. Experimental results on remote sensing image datasets covering various objects under complex conditions demonstrate the effectiveness and feasibility of the proposed method.
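The final integration step, combining the two cue-specific saliency maps with a Bayesian formula, can be illustrated with a common pixelwise simplification. The paper's exact formula is not reproduced here; the sketch below assumes both maps are normalized to [0, 1] and treats one map as the prior and the other as a likelihood:

```python
import numpy as np

def bayes_fuse(s_global, s_background, eps=1e-8):
    """Pixelwise Bayesian integration of two saliency maps.

    Illustrative simplification (not necessarily the paper's exact
    formula): s_global acts as the prior P(salient) and s_background
    as the likelihood of the observation given saliency. Both inputs
    are assumed normalized to [0, 1].
    """
    s1 = np.asarray(s_global, dtype=float)
    s2 = np.asarray(s_background, dtype=float)
    joint = s1 * s2
    # Posterior odds of "salient" vs. "background" at each pixel.
    return joint / (joint + (1.0 - s1) * (1.0 - s2) + eps)
```

When both cues agree that a pixel is salient (e.g., 0.9 and 0.9), the posterior rises above either input; agreement on background suppresses it further, while disagreement pulls the result toward 0.5.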
Highlights
Object detection in remote sensing images is of vital importance and has great potential in many fields such as navigation reconnaissance, autonomous navigation, scene understanding, geological survey, and precision-guided systems
We propose a sparsity-guided saliency model (SGSM) that combines global cues with background priors for saliency detection in remote sensing images
Methods based on background priors, such as dense and sparse reconstruction (DSR) and graph-based manifold ranking (GBMR), fail to detect salient regions accurately when those regions are similar in appearance to the background
Summary
Object detection in remote sensing images is of vital importance and has great potential in many fields such as navigation reconnaissance, autonomous navigation, scene understanding, geological survey, and precision-guided systems. Sun et al. [1] detected salient regions in remote sensing images by fusing the saliency maps of an edge-based and a graph-based visual saliency model. We propose a sparsity-guided saliency model (SGSM) that combines global cues with background priors for saliency detection in remote sensing images. The model measures the relationship between image patches and a dictionary through sparse representation to generate an objective saliency map. It exploits sparse representation to produce high-level cues via two dictionaries, one global-based and one background-based, which are constructed from low-level cues (global cues and the background prior, respectively) and carry category information (i.e., object or background).
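The reconstruction-error idea behind these dictionaries can be sketched as follows. This is not the authors' implementation: the greedy orthogonal matching pursuit, the patch layout, and the sparsity level `k` are illustrative assumptions. The intuition is that a patch the background dictionary reconstructs well receives low saliency, while a poorly reconstructed patch receives high saliency:

```python
import numpy as np

def omp_residual(dictionary, x, k):
    """Greedy orthogonal matching pursuit: approximate x with at most
    k atoms (columns) of the dictionary, returning the residual."""
    residual = x.copy()
    chosen = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(dictionary.T @ residual)))
        if j not in chosen:
            chosen.append(j)
        # Re-fit coefficients over all chosen atoms (least squares).
        coef, *_ = np.linalg.lstsq(dictionary[:, chosen], x, rcond=None)
        residual = x - dictionary[:, chosen] @ coef
    return residual

def sparsity_saliency(patches, dictionary, k=3):
    """Saliency of each patch (one column of `patches`) = its sparse
    reconstruction error against the dictionary, normalized to [0, 1].
    Large error => the patch is unlike the dictionary => salient."""
    errors = np.array(
        [np.linalg.norm(omp_residual(dictionary, p, k)) for p in patches.T]
    )
    span = errors.max() - errors.min()
    return (errors - errors.min()) / (span + 1e-12)
```

With a background dictionary built from border patches (a common background-prior heuristic), a patch identical to a dictionary atom scores near 0, while a patch the dictionary cannot represent scores near 1.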