Abstract

The success of near-infrared spectroscopy (NIRS) analysis hinges on the precision and robustness of the calibration model. Shallow learning (SL) algorithms such as partial least squares discriminant analysis (PLS-DA) often fail to capture the interrelationships between adjacent spectral variables, and their results are easily degraded by spectral noise, which sharply limits the breadth and depth of NIRS applications. Deep learning (DL) methods, with their capacity to discern intricate features from limited samples, have been progressively integrated into NIRS. In this paper, two discriminant analysis problems, the classification of wheat kernels and of Yali pears, were used together with several representative calibration models to investigate model robustness and effectiveness. In addition, this article proposes a near-infrared calibration model based on the Gramian angular difference field (GADF) method and coordinate attention convolutional neural networks (G-CACNNs). The results show that, compared with SL, spectral preprocessing has a smaller impact on the analysis accuracy of consensus learning (CL) and DL, and DL achieves the highest accuracy when modeling the original spectra. The accuracy of G-CACNNs in the two discrimination tasks was 98.48% and 99.39%, respectively. Finally, this research compared the performance of the various models under added noise to evaluate the robustness and noise resistance of the proposed method.
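For context, the first stage of a GADF-based calibration model encodes each 1-D spectrum as a 2-D image that a CNN can consume. The sketch below shows the standard GADF transform only, assuming the spectrum is min-max rescaled to [-1, 1]; the function name and array sizes are illustrative and this is not the authors' exact implementation.

```python
import numpy as np

def gadf_image(spectrum: np.ndarray) -> np.ndarray:
    """Convert a 1-D NIR spectrum into a 2-D Gramian angular difference field.

    Each rescaled value is interpreted as cos(phi); the GADF entry (i, j)
    is sin(phi_i - phi_j).
    """
    x = np.asarray(spectrum, dtype=float)
    # Min-max rescaling to [-1, 1] so that the angular encoding is well defined.
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    # Recover sin(phi) from cos(phi) = x; clip guards against rounding error.
    sin_phi = np.sqrt(np.clip(1.0 - x ** 2, 0.0, 1.0))
    # GADF(i, j) = sin(phi_i)cos(phi_j) - cos(phi_i)sin(phi_j)
    return np.outer(sin_phi, x) - np.outer(x, sin_phi)

# Example: a 256-point spectrum becomes a 256 x 256 image for the CNN branch.
image = gadf_image(np.random.rand(256))
print(image.shape)  # (256, 256)
```

The resulting image would then be passed to a coordinate attention CNN for classification; that network architecture is not reproduced here.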
