Abstract

Automatic detection of geospatial targets in cluttered scenes is a profound challenge in the field of aerial and satellite image analysis. In this paper, we propose a novel, practical framework that enables efficient and simultaneous detection of multi-class geospatial targets in remote sensing images (RSI) by integrating visual saliency modeling with discriminative sparse coding. First, a computational saliency prediction model is built by learning a direct mapping from a variety of visual features to a ground-truth set of salient objects in geospatial images manually annotated by experts. The output of this model predicts a small set of target candidate areas. Then, in contrast with typical models that are trained independently for each target class, we train a multi-class object detector that can simultaneously localize multiple targets from multiple classes using discriminative sparse coding. The Fisher discrimination criterion is incorporated into the dictionary learning, yielding discriminative sparse coding coefficients with small within-class scatter and large between-class scatter. Multi-class classification can therefore be achieved using both the reconstruction error and the discriminative coding coefficients. Finally, the trained multi-class object detector is applied to the target candidate areas instead of the entire image in order to classify them into the various target categories, which significantly reduces the cost of traditional exhaustive search. Comprehensive evaluations on a satellite RSI database and comparisons with a number of state-of-the-art approaches demonstrate the effectiveness and efficiency of the proposed work.
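The classification step described above, assigning a candidate region to the class whose sub-dictionary reconstructs it best, can be illustrated with a minimal sketch. This is not the authors' implementation: the toy example below learns each class's sub-dictionary from an SVD of that class's samples (a simplified stand-in for Fisher-discriminative dictionary learning) and replaces true sparse coding with ridge-regularized coding; all function names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_class_dictionaries(X_by_class, n_atoms=3):
    """Toy per-class dictionaries: leading principal directions of each
    class (a simplified stand-in for Fisher-discriminative learning)."""
    dicts = {}
    for label, X in X_by_class.items():  # X has shape (n_samples, dim)
        _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
        dicts[label] = Vt[:n_atoms].T    # atoms as columns, shape (dim, n_atoms)
    return dicts

def classify(x, dicts, lam=0.1):
    """Assign x to the class whose sub-dictionary reconstructs it with the
    smallest residual, using ridge-regularized coding (not true sparse coding)."""
    best, best_err = None, np.inf
    for label, D in dicts.items():
        # ridge solution: a = (D^T D + lam I)^{-1} D^T x
        a = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
        err = np.linalg.norm(x - D @ a)  # class-wise reconstruction error
        if err < best_err:
            best, best_err = label, err
    return best

# Toy demo: two classes of features living near different 3-D subspaces of R^20.
dim = 20
B0, B1 = rng.normal(size=(dim, 3)), rng.normal(size=(dim, 3))
X0 = (B0 @ rng.normal(size=(3, 50))).T
X1 = (B1 @ rng.normal(size=(3, 50))).T
dicts = learn_class_dictionaries({0: X0, 1: X1})
```

A held-out sample drawn from one class's subspace is then labeled by the sub-dictionary with the lowest reconstruction error; the full method additionally exploits the discriminative coding coefficients, which this sketch omits.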
