Abstract
In this paper, superpixel features and extended multi-attribute profiles (EMAPs) are embedded in a multiple kernel learning framework to simultaneously exploit the local and multiscale information in both the spatial and spectral dimensions for hyperspectral image (HSI) classification. First, the original HSI is reduced to three principal components in the spectral domain using principal component analysis (PCA). Then, a fast and efficient segmentation algorithm, simple linear iterative clustering (SLIC), is used to segment the principal components into a given number of superpixels. By setting different numbers of superpixels, a set of multiscale homogeneous regional features is extracted. Based on these superpixels and their first-order adjacent superpixels, EMAPs with multimodal features are extracted and embedded into the multiple kernel framework to generate different spatial and spectral kernels. Finally, a PCA-based kernel learning algorithm is used to learn an optimal kernel that contains multiscale and multimodal information. Experimental results on two well-known datasets validate the effectiveness and efficiency of the proposed method compared with several state-of-the-art HSI classifiers.
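The first two stages of this pipeline (PCA reduction to three components, then SLIC segmentation at several scales) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the segment counts, compactness value, and function names are assumptions chosen for clarity.

```python
# Sketch of the first two stages described in the abstract: reduce the
# hyperspectral cube to three principal components, then run SLIC at
# several superpixel scales. Scale counts and parameters are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from skimage.segmentation import slic

def multiscale_superpixels(hsi_cube, n_segments_list=(100, 200, 400)):
    """hsi_cube: (H, W, B) hyperspectral image; returns PCs and label maps."""
    h, w, b = hsi_cube.shape

    # Spectral dimensionality reduction: keep the first three principal components.
    pcs = PCA(n_components=3).fit_transform(hsi_cube.reshape(-1, b))
    pcs = pcs.reshape(h, w, 3)

    # Rescale to [0, 1] so SLIC treats the PCs like a three-channel image.
    pcs = (pcs - pcs.min()) / (pcs.max() - pcs.min() + 1e-12)

    # One SLIC segmentation per scale; each label map defines a set of
    # homogeneous regions from which regional features (e.g., EMAPs) can be drawn.
    label_maps = [
        slic(pcs, n_segments=n, compactness=10.0, start_label=0, channel_axis=-1)
        for n in n_segments_list
    ]
    return pcs, label_maps
```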
Highlights
At present, hyperspectral images (HSIs) are attracting increasing attention.
In this study, following the multiple kernel learning framework, we propose a novel multiscale, adjacent superpixel-based embedded multiple kernel learning method with the extended multi-attribute profile (MASEMAP-MKL) for HSI classification.
A comparison with the ground truth reveals that the proposed MASEMAP-MKL produces a classification map much closer to the ground truth.
Summary
Hyperspectral images (HSIs) are attracting increasing attention. With the rapid iteration of hyperspectral sensors, researchers can collect large amounts of HSI data with high spatial resolution and many spectral bands, which form high-dimensional features capturing complex and fine geometrical structures [1,2]. Many classic machine learning methods can be applied directly to HSI classification, such as naive Bayes, decision trees, K-nearest neighbor (KNN), wavelet analysis, support vector machines (SVMs), random forest (RF), regression trees, ensemble methods, and linear regression [7,8,9]. These methods either treat the HSI as a stack of several hundred gray-scale images and extract the corresponding features for classification, or use only spectral features, producing unsatisfactory results [6].
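A hedged sketch of the spectral-only, per-pixel baseline described above is given below: each labelled pixel's spectrum is treated as an independent feature vector and fed to a classic classifier (an RBF-kernel SVM here). The arrays `hsi_cube` and `ground_truth`, the training ratio, and the SVM parameters are placeholders, not the paper's experimental setup.

```python
# Spectral-only per-pixel baseline: classify each pixel from its spectrum alone,
# ignoring spatial context. Inputs are placeholders for a real labelled dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def spectral_only_svm(hsi_cube, ground_truth, train_ratio=0.1):
    """hsi_cube: (H, W, B); ground_truth: (H, W) with 0 meaning unlabelled."""
    h, w, b = hsi_cube.shape
    X = hsi_cube.reshape(-1, b)
    y = ground_truth.reshape(-1)

    # Keep only labelled pixels; spatial neighbourhoods are deliberately ignored.
    labelled = y > 0
    X, y = X[labelled], y[labelled]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=train_ratio, stratify=y, random_state=0
    )

    clf = SVC(kernel="rbf", C=100.0, gamma="scale").fit(X_train, y_train)
    return clf.score(X_test, y_test)  # overall accuracy on held-out pixels
```

Because such a baseline never looks at a pixel's surroundings, it cannot exploit the homogeneous regions that superpixel segmentation exposes, which is the gap the proposed spatial-spectral multiple kernel approach addresses.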