Abstract

Motivation: Data splitting is a fundamental step in building classification models with spectral data, especially in biomedical applications. It is performed after pre-processing and prior to model construction, and consists of dividing the samples into at least a training and a test set; the training set is used for model construction and the test set for model validation. Two of the most widely used methodologies for data splitting are random selection (RS) and the Kennard-Stone (KS) algorithm; the former splits the samples at random, whereas the latter selects samples based on the Euclidean distance between them. We propose the Morais-Lima-Martin (MLM) algorithm as an alternative method to improve data splitting in classification models. MLM is a modification of the KS algorithm that adds a random-mutation factor.

Results: The performance of RS, KS and MLM is compared in simulated and six real-world biospectroscopic applications using principal component analysis linear discriminant analysis (PCA-LDA). MLM generated better predictive performance than the RS and KS algorithms, in particular regarding sensitivity and specificity values, and classification was better balanced using MLM. RS showed the poorest predictive response, followed by KS, which showed good prediction accuracy but relatively unbalanced sensitivities and specificities. These findings demonstrate the potential of the new MLM algorithm as a sample selection method for classification applications, in comparison with other methods commonly applied to this type of data.

Availability and implementation: The MLM algorithm is freely available for MATLAB at https://doi.org/10.6084/m9.figshare.7393517.v1.
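
As a rough illustration of the splitting strategies described above, the Python sketch below implements a Kennard-Stone-style max-min selector with an optional random-mutation probability. The function name `kennard_stone_mlm` and the `mutation_rate` parameter are hypothetical; the paper's exact mutation mechanism and its MATLAB implementation are not reproduced here, so this is only a sketch of the general idea.

```python
import numpy as np

def kennard_stone_mlm(X, n_train, mutation_rate=0.0, rng=None):
    """Kennard-Stone sample selection with an optional random-mutation step.

    mutation_rate is a hypothetical knob: with that probability, the next
    training sample is drawn at random instead of by the usual max-min
    Euclidean distance rule. mutation_rate=0 gives classical Kennard-Stone;
    the actual MLM mutation scheme may differ.
    """
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)
    n = X.shape[0]

    # Pairwise Euclidean distances between all samples.
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

    # Start with the two most distant samples (classical KS initialisation).
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    selected = [int(i), int(j)]
    remaining = [k for k in range(n) if k not in selected]

    while len(selected) < n_train:
        if rng.random() < mutation_rate:
            # Random mutation: pick the next training sample at random.
            idx = int(rng.integers(len(remaining)))
        else:
            # KS rule: pick the remaining sample farthest from its
            # nearest already-selected neighbour (max-min criterion).
            min_d = dist[np.ix_(remaining, selected)].min(axis=1)
            idx = int(np.argmax(min_d))
        selected.append(remaining.pop(idx))

    # Remaining samples form the test set.
    return np.array(selected), np.array(remaining)
```

With mutation_rate=0 this reduces to classical Kennard-Stone selection; raising it blends in random selection, which is one plausible reading of the random-mutation factor described above.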

Highlights

  • Data splitting is a process used to separate a given dataset into at least two subsets called ‘training’ and ‘test’

  • Classification of spectral data is commonly performed using chemometric methods such as principal component analysis linear discriminant analysis (PCA-LDA) (Morais and Lima, 2018), partial least squares discriminant analysis (PLS-DA) (Brereton and Lloyd, 2014), or support vector machines (SVM) (Cortes and Vapnik, 1995)

  • Internal validation is performed by first removing a certain number of samples from the training set and building the classification model with the remaining samples; the removed samples are then predicted as a temporary validation set (a minimal pipeline sketch follows this list)

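The sketch below (assuming Python with NumPy and scikit-learn, and purely synthetic stand-in data) shows how such a pipeline is typically assembled: a training/test split, PCA for dimensionality reduction, LDA for classification, and sensitivity/specificity computed on the held-out test set. It uses a plain random split for brevity; a KS- or MLM-style selector could be substituted for the splitting step. This is an illustrative assembly, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for spectral data: rows are samples, columns are wavenumbers.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 600))
y = rng.integers(0, 2, size=100)

# Random-selection (RS) split into training and test sets; a KS- or MLM-style
# selector could be used here instead.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# PCA for dimensionality reduction followed by LDA for classification (PCA-LDA).
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
model.fit(X_train, y_train)

# Predict the held-out test set and report sensitivity and specificity.
tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```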

Summary

Introduction

Data splitting is a process used to separate a given dataset into at least two subsets called ‘training’ (or ‘calibration’) and ‘test’ (or ‘prediction’).

