Abstract

Background

Benchmark datasets are essential for both method development and performance assessment. These datasets have numerous requirements, representativeness being one. In the case of variant tolerance/pathogenicity prediction, representativeness means that the dataset covers the space of variations and their effects.

Results

We performed the first analysis of the representativeness of variation benchmark datasets. We used statistical approaches to investigate how well the proteins in the benchmark datasets represent the entire human protein universe. We investigated the distributions of variants over chromosomes, protein structures, CATH domains and classes, Pfam protein families, Enzyme Commission (EC) classifications and Gene Ontology annotations in 24 datasets that have been used for training and testing variant tolerance prediction methods. All the datasets are available in the VariBench or VariSNP databases. We also tested whether the pathogenic variant datasets contained neutral variants, defined as those with a high minor allele frequency in the ExAC database. The distributions of variants over the chromosomes and proteins varied greatly between the datasets.

Conclusions

None of the datasets was found to be well representative, although many of the tested datasets had quite good coverage of the different protein characteristics. Dataset size correlates with representativeness but only weakly with the performance of methods trained on the datasets. The results imply that dataset representativeness is an important factor and should be taken into account in predictor development and testing.
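To make the distribution comparison concrete, the following is a minimal sketch of the kind of representativeness check described above: a chi-square goodness-of-fit test of a benchmark dataset's per-chromosome variant counts against the distribution in a reference set. All counts, variable names and the choice of test here are illustrative assumptions, not the paper's actual data or pipeline.

```python
"""Illustrative representativeness check (not the authors' exact method).

Compares how a benchmark dataset's variants distribute over chromosomes
against a reference distribution, using a chi-square goodness-of-fit test.
All counts below are made up for the example.
"""
from scipy.stats import chisquare

# Hypothetical per-chromosome variant counts in a benchmark dataset.
benchmark_counts = {"chr1": 410, "chr2": 280, "chr17": 350, "chrX": 60}

# Hypothetical per-chromosome counts in the reference "protein universe".
reference_counts = {"chr1": 20000, "chr2": 13000, "chr17": 12000, "chrX": 8000}

chroms = sorted(benchmark_counts)
observed = [benchmark_counts[c] for c in chroms]

# Scale the reference proportions to the benchmark's total so that both
# vectors sum to the same value, as chisquare() requires.
total_obs = sum(observed)
total_ref = sum(reference_counts[c] for c in chroms)
expected = [reference_counts[c] / total_ref * total_obs for c in chroms]

stat, p = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p:.3g}")
# A small p-value indicates that the benchmark's chromosome distribution
# deviates from the reference, i.e. the dataset may not be representative.
```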

Highlights

  • Benchmark datasets are essential for both method development and performance assessment

  • We investigated the representativeness of datasets used for training and testing variant tolerance predictors that are available in VariBench and VariSNP

  • We tested whether the pathogenic variant datasets contain benign variants, defined as those with high minor allele frequency in the ExAC database (see the sketch after this list)
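A minimal sketch of the allele-frequency sanity check referred to in the last highlight: flagging variants labelled pathogenic whose minor allele frequency (MAF) in ExAC is suspiciously high. The data records and the 1% cutoff are illustrative assumptions, not values taken from the study.

```python
"""Sketch of an ExAC allele-frequency check on a pathogenic variant set.

Flags variants labelled pathogenic whose minor allele frequency (MAF)
exceeds a chosen threshold, suggesting they may actually be benign.
The records and the cutoff below are illustrative assumptions.
"""
MAF_CUTOFF = 0.01  # assumed threshold; the appropriate cutoff is a judgment call

# Hypothetical records: (variant id, label, ExAC minor allele frequency).
pathogenic_set = [
    ("VAR001", "pathogenic", 0.00002),
    ("VAR002", "pathogenic", 0.034),   # suspiciously common for a pathogenic variant
    ("VAR003", "pathogenic", None),    # absent from ExAC
]

suspect = [
    (vid, maf)
    for vid, label, maf in pathogenic_set
    if maf is not None and maf > MAF_CUTOFF
]
print(f"{len(suspect)} of {len(pathogenic_set)} 'pathogenic' variants "
      f"have ExAC MAF > {MAF_CUTOFF:.0%}: {suspect}")
```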


Introduction

Benchmark datasets are essential for method developers as well as for those who want to find the best performing tools. There are a number of requirements for benchmark datasets [1]. These include relevance, representativeness, non-redundancy, scalability and reusability; in addition, cases must be experimentally verified and contain both positive and negative examples. The benchmark data should be relevant for the studied phenomenon in order to capture its characteristics. The data entries should be experimentally verified, not predicted. There must be both positive and negative examples. It is preferable to be able to reuse the dataset for different purposes.

