Abstract

In recent years, kernel methods, in particular support vector machines (SVMs), have been successfully introduced to remote sensing image classification. Their properties make them appropriate for dealing with a high number of image features and a low number of available labeled spectra. Alternative approaches based on (parametric) Bayesian inference have, by contrast, been introduced only scarcely in recent years, since assuming a particular prior data distribution may lead to poor results in remote sensing problems because of the specificities and complexity of the data. In this context, the emerging field of nonparametric Bayesian methods constitutes a proper theoretical framework to tackle the remote sensing image classification problem. This paper exploits the Bayesian modeling and inference paradigm to tackle the problem of kernel-based remote sensing image classification. This Bayesian methodology is appropriate for both finite- and infinite-dimensional feature spaces. The particular problem of active learning is addressed by proposing an incremental/active learning approach based on three different criteria: 1) the maximum differential of entropies; 2) the minimum distance to the decision boundary; and 3) the minimum normalized distance. Parameters are estimated by using the evidence Bayesian approach, the kernel trick, and the marginal distribution of the observations instead of the posterior distribution of the adaptive parameters. This approach allows us to deal with infinite-dimensional feature spaces. The proposed approach is tested on the challenging problem of urban monitoring from multispectral and synthetic aperture radar data and on multiclass land cover classification of hyperspectral images, in both purely supervised and active learning settings. In the supervised setting, results similar to those of SVMs are obtained, with the advantage of providing posterior estimates for classification and automatic parameter learning. Comparison with random sampling as well as standard active learning methods such as margin sampling and entropy query-by-bagging reveals a systematic overall accuracy gain and faster convergence with the number of queries.
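The abstract describes a pool-based active learning setting in which the most informative unlabeled samples are queried according to uncertainty criteria such as the maximum differential of entropies. The following is a minimal, generic sketch of such a loop using a predictive-entropy criterion and an off-the-shelf kernel classifier (scikit-learn's GaussianProcessClassifier); the synthetic data, the classifier choice, and the single entropy criterion are illustrative assumptions and do not reproduce the paper's Bayesian kernel model or its three query criteria.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Illustrative pool-based active learning loop (not the authors' exact method):
# a kernel probabilistic classifier is retrained as the most uncertain
# unlabeled samples, ranked here by predictive entropy, are queried and
# added to the training set. Data and "oracle" labels are synthetic stand-ins.

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 5))                       # hypothetical candidate spectra
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)   # synthetic oracle labels

# Small initial training set containing both classes.
labeled = list(np.concatenate([np.where(y_pool == 0)[0][:5],
                               np.where(y_pool == 1)[0][:5]]))
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

clf = GaussianProcessClassifier(kernel=RBF(length_scale=1.0))

for query_round in range(20):
    clf.fit(X_pool[labeled], y_pool[labeled])
    proba = clf.predict_proba(X_pool[unlabeled])
    # Entropy of the predictive distribution: larger values mean the model is
    # less certain, so those samples are the most informative to label next.
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    pick = unlabeled[int(np.argmax(entropy))]
    labeled.append(pick)
    unlabeled.remove(pick)

print("final training-set size:", len(labeled))
```

In practice, the query criterion (entropy, distance to the decision boundary, or a normalized variant) is the main design choice, and the classifier is retrained or incrementally updated after each query.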
