Abstract

Differences in the matrix of soil samples collected from different areas limit the performance of nutrient analysis via XRF sensors, and only a few strategies to mitigate this effect and ensure accurate analysis have been proposed so far. In this context, this research aimed to compare the performance of different predictive models, including simple linear regression (RS), multiple linear regression (MLR), partial least-squares regression (PLS), and random forest (RF) models, for the prediction of Ca and K in agricultural soils. RS models were evaluated on XRF data without (RS1) and with (RS2) Compton normalization. In addition, it was assessed whether using soil texture information and/or vis–NIR spectra as auxiliary variables would improve the predictive performance of the models. The results showed that all strategies mitigated the matrix effect to some degree, enabling the determination of Ca and K contents with excellent predictive performance (R2 ≥ 0.84). The best performance was obtained using RS2 for the Ca prediction (R2 = 0.92, RMSE = 48.25 mg kg−1, and a relative improvement (RI) of 52.3% compared to RS1) and using RF for the K prediction (R2 = 0.84, RMSE = 17.43 mg kg−1, and an RI of 24.3% compared to RS1). The results indicated that sophisticated models did not always perform better than linear models. Furthermore, using texture data and vis–NIR spectra as auxiliary data was promising only for the K prediction, reducing the prediction error by about 10%, in contrast with the Ca prediction, whose error was reduced by no more than 1%. The best modeling approach proved to be attribute-specific. These results give further insight into the development of intelligent modeling strategies for sensor-based soil analysis.
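
As an illustration only (not the authors' code), the sketch below shows how the two best-performing strategies reported above could be set up: Compton-normalized simple regression (RS2) for Ca and a random forest using soil texture as auxiliary data for K. The file name, column names, and cross-validation scheme are assumptions for the example, not details taken from the paper.

```python
# Hypothetical sketch of the RS2 (Ca) and RF (K) strategies described in the abstract.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical dataset: XRF emission-line intensities, Compton peak intensity,
# texture fractions, and lab-reference Ca and K contents (mg kg-1).
df = pd.read_csv("soil_xrf_dataset.csv")

# RS2 for Ca: simple linear regression on the Ca K-alpha intensity divided by the
# Compton scatter intensity (Compton normalization helps compensate matrix effects).
X_rs2 = (df["Ca_Ka_intensity"] / df["Compton_intensity"]).to_numpy().reshape(-1, 1)
y_ca = df["Ca_lab"].to_numpy()
ca_pred = cross_val_predict(LinearRegression(), X_rs2, y_ca, cv=10)
print(f"Ca (RS2): R2={r2_score(y_ca, ca_pred):.2f}, "
      f"RMSE={mean_squared_error(y_ca, ca_pred) ** 0.5:.1f} mg kg-1")

# RF for K: K K-alpha and Compton intensities plus texture fractions as predictors.
X_rf = df[["K_Ka_intensity", "Compton_intensity", "clay", "sand", "silt"]].to_numpy()
y_k = df["K_lab"].to_numpy()
k_pred = cross_val_predict(RandomForestRegressor(n_estimators=500, random_state=0),
                           X_rf, y_k, cv=10)
print(f"K (RF): R2={r2_score(y_k, k_pred):.2f}, "
      f"RMSE={mean_squared_error(y_k, k_pred) ** 0.5:.1f} mg kg-1")
```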
