Abstract

Background: Hyperspectral imaging systems face numerous challenges in acquiring accurate spatial-spectral hypercubes, owing to sample surface heterogeneity, environmental instability, and instrumental noise. Preprocessing strategies such as outlier detection, calibration, smoothing, and normalization are typically employed to address these issues, with the appropriate technique selected on the basis of prediction performance. However, the risk of applying an inappropriate preprocessing method remains a concern. Methods: In this study, we evaluate the impact of five normalization methods on the classification performance of six different classifiers using honey hyperspectral images. Results: Different classifiers are compatible with different normalization techniques, and using Batch Normalization in a convolutional neural network (CNN) can significantly improve classification performance and diminish the variation among normalization techniques. The CNN with Batch Normalization achieves a macro-average F1 score of ≥0.99 with four of the normalization methods and ≥0.97 without normalization. Furthermore, we analyze the distribution of kernel weights in the final convolutional layers of the CNN models using statistical measures and kernel density estimation (KDE) plots, and find that the performance improvements from adding BatchNorm layers are associated with kernel weight range, kurtosis, and density around 0. The differences among normalization methods, however, do not show a strong correlation with kernel weight distribution. Conclusions: The CNN with Batch Normalization layers achieves better prediction results and avoids the risk of inappropriate normalization.
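
As a hedged illustration of the kind of analysis described above, the sketch below builds a small CNN for hyperspectral patches with optional BatchNorm layers and then inspects the kernel weights of its final convolutional layer (range, kurtosis, and KDE-estimated density at 0). It uses PyTorch and SciPy; the SmallHSICNN architecture, the layer widths, and the band count of 224 are illustrative assumptions, not the configuration used in the study.

    import torch
    import torch.nn as nn
    from scipy.stats import kurtosis, gaussian_kde

    class SmallHSICNN(nn.Module):
        """Toy CNN for hyperspectral patches; BatchNorm can be toggled on or off."""
        def __init__(self, n_bands, n_classes, use_batchnorm=True):
            super().__init__()
            def conv_block(c_in, c_out):
                layers = [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)]
                if use_batchnorm:
                    layers.append(nn.BatchNorm2d(c_out))  # BatchNorm after each conv
                layers.append(nn.ReLU())
                return layers
            self.features = nn.Sequential(
                *conv_block(n_bands, 32),
                *conv_block(32, 64),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x):  # x: (batch, n_bands, height, width)
            return self.classifier(self.features(x).flatten(1))

    model = SmallHSICNN(n_bands=224, n_classes=6, use_batchnorm=True)

    # Inspect the kernel weights of the final convolutional layer:
    # range, kurtosis, and the density around 0 estimated with a Gaussian KDE.
    last_conv = [m for m in model.modules() if isinstance(m, nn.Conv2d)][-1]
    w = last_conv.weight.detach().flatten().numpy()
    print("weight range  :", float(w.max() - w.min()))
    print("kurtosis      :", kurtosis(w))
    print("KDE density@0 :", gaussian_kde(w)(0.0)[0])

Comparing these statistics between models trained with use_batchnorm=True and use_batchnorm=False (and across normalization methods) mirrors the comparison reported in the abstract, though the specific values depend on training, which is omitted here.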
