Abstract

The accuracy of laser-induced breakdown spectroscopy (LIBS) methods for analyzing geological samples improves when calibration standards and unknown targets are compositionally similar. A recent study suggests that customized submodels can be used to optimize calibration datasets and achieve more accurate predictions [1]. In practice, this is difficult to implement because errors inherent in the methods used to sort unknown targets by composition may limit how successfully this matching can occur. Moreover, creating submodels intrinsically reduces the size of the dataset on which each model is trained, which has been shown to reduce prediction accuracy. This paper uses LIBS spectra of 2990 unique rock powder standards to compare the accuracy of (1) submodels generated for each element over its geochemical range, (2) submodels created using SiO2 content only, (3) submodels created using the ratio of Si(II)/Si(I) emission lines to group spectra by a proxy for approximate plasma temperature, and (4) models created using all data. Results indicate that prediction accuracies are not always improved by creating submodels: subdividing a dataset to optimize calibrations necessarily leaves a smaller database available to train each submodel, and the reduced training set size negatively affects accuracy. Customized LIBS standards for specific applications might overcome this problem in cases where the matrix is similar and the expected concentration range is known. However, in the majority of geochemical applications, submodel approaches improve prediction accuracies only when the initial database is itself extensive enough to support large, robust submodel calibration suites.
