Abstract

A key challenge in the field of Quantitative Structure Activity Relationships (QSAR) is how to effectively treat experimental error in the training and evaluation of computational models. It is often assumed in the field of QSAR that models cannot produce predictions which are more accurate than their training data. Additionally, it is implicitly assumed, by necessity, that data points in test or validation sets contain no error, and that each data point is a population mean. This work proposes the hypothesis that QSAR models can make predictions which are more accurate than their training data, and that the error-free test set assumption leads to a significant misevaluation of model performance. Eight datasets spanning six common QSAR endpoints were used, because different endpoints should carry different amounts of experimental error, reflecting the varying complexity of the measurements. Up to 15 levels of simulated Gaussian-distributed random error were added to the datasets, and models were built on the error-laden data using five different algorithms. The models were trained on the error-laden data and evaluated on both error-laden and error-free test sets. The results show that, for each level of added error, the RMSE on the error-free test sets was always better. The results support the hypothesis that, at least under conditions of Gaussian-distributed random error, QSAR models can make predictions which are more accurate than their training data, and that evaluating models on error-laden test and validation sets may give a flawed measure of model performance. These results have implications for how QSAR models are evaluated, especially in disciplines where experimental error is very large, such as computational toxicology.
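As a hedged illustration of the protocol summarized above, the sketch below adds Gaussian-distributed random error to training and test labels, fits a model, and compares RMSE against error-laden versus error-free test values. This is not the authors' code: the synthetic dataset, the random-forest learner (one plausible stand-in for the five algorithms), and the noise levels are all assumptions chosen for illustration.

    # Sketch of the noise-injection experiment described in the abstract.
    # The synthetic data, learner, and noise levels are illustrative only.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X, y = make_regression(n_samples=1000, n_features=50, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for sigma in (0.0, 0.25, 0.5, 1.0):  # noise as a fraction of the label std
        # Simulate experimental error on both training and test labels
        y_tr_noisy = y_tr + rng.normal(0.0, sigma * y_tr.std(), size=y_tr.shape)
        y_te_noisy = y_te + rng.normal(0.0, sigma * y_te.std(), size=y_te.shape)
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_tr, y_tr_noisy)
        pred = model.predict(X_te)
        rmse_laden = np.sqrt(mean_squared_error(y_te_noisy, pred))  # error-laden test set
        rmse_free = np.sqrt(mean_squared_error(y_te, pred))         # error-free test set
        print(f"sigma={sigma:.2f}  RMSE (error-laden): {rmse_laden:.2f}  "
              f"RMSE (error-free): {rmse_free:.2f}")

Under these assumptions, the RMSE against the error-free labels should come out lower at every noise level, mirroring the qualitative trend the abstract reports.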

Highlights

  • One of the key challenges in Quantitative Structure Activity Relationship (QSAR) modeling is evaluating the predictive performance of models, and evaluation methodology has been the subject of many studies in the past several decades [1,2,3,4,5,6].

  • The purpose of this work is to examine the common assumption that QSAR models cannot make predictions which are more accurate than their training data.

  • When multiple experimental values are available, there are often still too few to reliably approximate the population mean of the measurement. This means that QSAR models are built on data which may poorly capture the physical reality of the trends being modeled.


Introduction

One of the key challenges in Quantitative Structure Activity Relationship (QSAR) modeling is evaluating the predictive performance of models, and evaluation methodology has been the subject of many studies in the past several decades [1,2,3,4,5,6]. The most problematic assumption about errors implicitly made during most QSAR modeling is that the given value for any experimental endpoint is the “true” value for that measurement. In more rigorous statistical terms, the assumption is that the given experimental value is the sample mean, and that this sample mean sufficiently approximates the population mean (true value) of all possible measurements [11]. Treating experimental endpoints as true values ignores the reality that experimental measurements have a distribution and an uncertainty associated with them, and this statistical reality has important effects on the predictivity of QSAR models.
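To make the sample-mean versus population-mean point concrete, the following sketch (with assumed, purely illustrative numbers, not values from any cited dataset) shows how far a reported value based on a handful of replicates can sit from the true value:

    # Illustration (assumed numbers) of how a reported endpoint, taken as the
    # mean of n replicates, scatters around the population mean by sigma/sqrt(n).
    import numpy as np

    rng = np.random.default_rng(42)
    true_value = 5.0   # hypothetical population mean, e.g. a pIC50
    sigma = 0.5        # hypothetical experimental standard deviation

    for n in (1, 3, 10, 100):
        # Many labs each report the mean of n replicate measurements
        reported = rng.normal(true_value, sigma, size=(100_000, n)).mean(axis=1)
        print(f"n={n:>3} replicates: reported values scatter with std {reported.std():.3f}")

With one to three replicates, typical of many QSAR endpoints, the reported value routinely misses the population mean by a large fraction of the experimental standard deviation.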
