Abstract

In the field of hyperspectral image classification, a widely used approach to the objective performance evaluation of different classification methods is to compute three accuracy indexes, i.e., the overall accuracy, the average accuracy, and the Kappa coefficient. These accuracy indexes are obtained by comparing the classification results with the ground truth, i.e., a reference classification map labeled by human experts. In this paper, the effect of ground truths on the objective performance evaluation of hyperspectral image classification is studied. The purpose is to investigate whether the above accuracy indexes remain fully reliable when the ground truth is insufficient. Furthermore, in order to measure the robustness of different classification methods to such insufficient ground truths, four evaluation metrics, i.e., the Pearson linear correlation coefficient, the root-mean-square error, Spearman's rank correlation coefficient, and Kendall's rank correlation coefficient, have been adopted for further analysis. Based on these experiments, an interesting conclusion can be drawn: insufficient ground truths may limit the assessment capability of existing accuracy indexes. This suggests that overoptimistic performance evaluations may exist and stresses the need to design more appropriate accuracy indexes for objective performance evaluation under insufficient ground truths.
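The three accuracy indexes named above are all derived from a confusion matrix between the predicted map and the ground truth. A minimal sketch of how they are commonly computed (the function name and interface here are illustrative, not taken from the paper):

```python
import numpy as np

def accuracy_indexes(y_true, y_pred, n_classes):
    """Compute overall accuracy, average accuracy, and the Kappa coefficient."""
    # Confusion matrix: rows index ground-truth classes, columns index predictions.
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    total = cm.sum()
    # Overall accuracy (OA): fraction of all samples labeled correctly.
    oa = np.trace(cm) / total
    # Average accuracy (AA): mean of the per-class accuracies.
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))
    # Kappa coefficient: agreement corrected for chance agreement p_e.
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / total**2
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```

Note that all three indexes depend entirely on the labeled reference pixels; pixels absent from the ground truth contribute nothing, which is exactly why an insufficient ground truth can bias the evaluation.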
