Abstract

This paper introduces a novel multi-view multi-learner (MVML) active learning (AL) method in which the different views are generated by a genetic algorithm (GA). The GA-based view generation method attempts to construct diverse, sufficient, and independent views by considering both inter- and intra-view confidences. Hyperspectral data are inherently high-dimensional, which makes them well suited to multi-view learning algorithms. Furthermore, employing multiple learners at each view yields a more accurate estimate of the underlying data distribution. We also implemented a spectral-spatial graph-based semi-supervised learning (SSL) method as the classifier, which improved classification performance compared with supervised learning. The proposed method was evaluated on three benchmark hyperspectral data sets, and the results were compared with other state-of-the-art AL-SSL methods. The experimental results demonstrate the efficiency and statistically significant superiority of the proposed method: the GA-MVML AL method improved classification performance by 16.68%, 18.37%, and 15.1% on the three data sets after 40 iterations.
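To make the described procedure concrete, the following minimal sketch illustrates a multi-view multi-learner active-learning query step of the kind the abstract outlines. The GA-based view generation is replaced here by a simple random partition of the spectral bands, and the learner pool, data sizes, and entropy-based query criterion are assumptions chosen for illustration only; this is not the authors' implementation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def generate_views(n_bands, n_views, rng):
    # Stand-in for the GA-based view generation: randomly partition the
    # spectral bands into disjoint views (the paper instead evolves this
    # partition with a genetic algorithm using inter-/intra-view confidences).
    return np.array_split(rng.permutation(n_bands), n_views)

def mvml_query(X_lab, y_lab, X_unlab, views, learner_factories):
    # One MVML active-learning step: train every learner on every view,
    # average their class posteriors, and return the index of the unlabeled
    # sample with the highest predictive entropy (most informative).
    posteriors = []
    for bands in views:
        for make_clf in learner_factories:
            clf = make_clf().fit(X_lab[:, bands], y_lab)
            posteriors.append(clf.predict_proba(X_unlab[:, bands]))
    mean_p = np.mean(posteriors, axis=0)
    entropy = -np.sum(mean_p * np.log(mean_p + 1e-12), axis=1)
    return int(np.argmax(entropy))

# Hypothetical usage on a hyperspectral cube flattened to (pixels, bands)
rng = np.random.default_rng(0)
X, y = rng.random((400, 100)), rng.integers(0, 3, 400)
labeled, unlabeled = np.arange(30), np.arange(30, 400)
views = generate_views(n_bands=X.shape[1], n_views=3, rng=rng)
learners = [lambda: RandomForestClassifier(n_estimators=50),
            lambda: SVC(probability=True, gamma="scale")]
query_idx = mvml_query(X[labeled], y[labeled], X[unlabeled], views, learners)

In a full loop, the queried sample would be labeled by an annotator, moved to the labeled set, and the learners retrained; the entropy criterion here is only one of several possible informativeness measures.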

Highlights

  • Supervised machine learning methods require an accurate and sufficient labeled set, which is complicated and costly to obtain

  • We propose a novel multi-view multi-learner (MVML) method that is especially well suited to hyperspectral image classification (HIC)

  • Previous studies on MVML methods have proven insufficient [21] in specific areas, which is why this paper proposes semi-supervised MVML active learning for hyperspectral data



Introduction

Supervised machine learning methods require an accurate and sufficient labeled set, which is complicated and costly to obtain. Providing a labeled set for the training procedure is even more challenging, mainly because ground truth data are generally collected through field surveys and/or visual interpretation, both of which are time-consuming and expensive. In practice, we usually have only a limited amount of sampled data with known labels. Both semi-supervised learning (SSL) and active learning (AL) are promising approaches for incorporating unlabeled data to improve learning performance [1]. They follow different assumptions about how unlabeled samples can be beneficial: SSL methods attempt to extract a more accurate underlying class distribution by considering the unlabeled samples, whereas AL methods query an annotator for the labels of the most informative samples.
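As a minimal illustration of the SSL assumption, the sketch below runs scikit-learn's graph-based LabelPropagation on synthetic data, spreading a handful of known labels over a similarity graph built from all samples. It is a generic, assumed stand-in used purely for illustration, not the spectral-spatial graph-based SSL classifier developed in this paper.

import numpy as np
from sklearn.semi_supervised import LabelPropagation

rng = np.random.default_rng(0)
X = rng.random((300, 10))            # feature vectors (e.g., pixel spectra)
y_true = rng.integers(0, 3, 300)
y = np.full(300, -1)                 # -1 marks unlabeled samples
labeled = rng.choice(300, size=15, replace=False)
y[labeled] = y_true[labeled]         # only a few labels are known

# Build an RBF similarity graph over all samples and propagate the labels
model = LabelPropagation(kernel="rbf", gamma=20)
model.fit(X, y)
predicted = model.transduction_      # inferred label for every sample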

