Abstract

Background: Set-level classification of gene expression data has received significant attention recently. In this setting, high-dimensional vectors of features corresponding to genes are converted into lower-dimensional vectors of features corresponding to biologically interpretable gene sets. The dimensionality reduction brings the promise of a decreased risk of overfitting, potentially resulting in improved accuracy of the learned classifiers. However, recent empirical research has not confirmed this expectation. Here we hypothesize that the reported unfavorable classification results in the set-level framework were due to the adoption of unsuitable gene sets, defined typically on the basis of the Gene Ontology and the KEGG database of metabolic networks. We explore an alternative approach to defining gene sets, based on regulatory interactions, which we expect to collect genes with more correlated expression. We hypothesize that such more correlated gene sets will enable more accurate classifiers to be learned.

Methods: We define two families of gene sets using information on regulatory interactions, and evaluate them on phenotype-classification tasks using public prokaryotic gene expression data sets. From each of the two gene-set families, we first select the best-performing subtype. The two selected subtypes are then evaluated on independent (testing) data sets against state-of-the-art gene sets and against the conventional gene-level approach.

Results: The novel gene sets are indeed more correlated than the conventional ones, and lead to significantly more accurate classifiers.

Conclusion: Novel gene sets defined on the basis of regulatory interactions improve set-level classification of gene expression data. The experimental scripts and other material needed to reproduce the experiments are available at http://ida.felk.cvut.cz/novelgenesets.tar.gz.

Electronic supplementary material: The online version of this article (doi:10.1186/s12859-015-0786-7) contains supplementary material, which is available to authorized users.

Highlights

  • Set-level classification of gene expression data has received significant attention recently

  • Unlike in more conventional gene expression analysis, the set-level approach assumes that high-dimensional vectors of gene expressions are represented by lower-dimensional vectors of aggregated expressions

  • We repeated the experiments with aggregation based on principal component analysis (PCAgg) and with SetSig aggregation [1]. We found that both PCAgg and SetSig lead to worse predictive accuracy than aggregation by averaging when using the novel gene sets, and to statistically insignificant differences for the Gene Ontology (GO) [7] + Kyoto Encyclopedia of Genes and Genomes (KEGG) gene sets (an illustrative sketch of the two aggregation schemes follows below)
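For illustration only, the following is a minimal sketch of the two aggregation schemes contrasted above: per-set averaging and PCA-based aggregation (projection of a set's member genes onto their first principal component). The function names, toy data, and gene-set definitions are hypothetical and not taken from the paper's pipeline; SetSig aggregation is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA

def aggregate_mean(X, gene_sets, gene_index):
    """One feature per gene set: the average expression of its member genes."""
    feats = []
    for genes in gene_sets:
        cols = [gene_index[g] for g in genes if g in gene_index]
        feats.append(X[:, cols].mean(axis=1))
    return np.column_stack(feats)

def aggregate_pca(X, gene_sets, gene_index):
    """One feature per gene set: the first principal component of its member genes."""
    feats = []
    for genes in gene_sets:
        cols = [gene_index[g] for g in genes if g in gene_index]
        pc1 = PCA(n_components=1).fit_transform(X[:, cols])
        feats.append(pc1.ravel())
    return np.column_stack(feats)

# Toy example: 6 samples x 5 genes, two hypothetical gene sets.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 5))
gene_index = {f"g{i}": i for i in range(5)}
gene_sets = [["g0", "g1", "g2"], ["g3", "g4"]]
print(aggregate_mean(X, gene_sets, gene_index).shape)  # (6, 2)
print(aggregate_pca(X, gene_sets, gene_index).shape)   # (6, 2)
```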


Summary

Introduction

Set-level classification of gene expression data has received significant attention recently. In this setting, high-dimensional vectors of features corresponding to genes are converted into lower-dimensional vectors of features corresponding to biologically interpretable gene sets. The sets are specified in terms of formalized biological background knowledge; a single set may, for example, collect all genes acting in a specific metabolic pathway. Given the entailed reduction in sample dimensionality, the set-level approach should lead to a decreased risk of overfitting, potentially resulting in improved accuracy of induced predictive models. This expectation was not confirmed by empirical research [1, 2, 5, 6].
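To make the gene-level versus set-level comparison concrete, the sketch below evaluates the same classifier on gene-level features and on set-level features obtained by per-set averaging, using cross-validation. The data, the contiguous-block gene sets, and the choice of a linear SVM are placeholders for illustration; they are not the experimental setup used in the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_samples, n_genes = 40, 500
X_genes = rng.normal(size=(n_samples, n_genes))  # gene-level expression matrix
y = rng.integers(0, 2, size=n_samples)           # binary phenotype labels

# Set-level features: average expression over (hypothetical) gene sets,
# simulated here as contiguous blocks of 25 genes each.
gene_sets = [range(i, i + 25) for i in range(0, n_genes, 25)]
X_sets = np.column_stack([X_genes[:, list(s)].mean(axis=1) for s in gene_sets])

clf = LinearSVC(dual=False)
acc_gene = cross_val_score(clf, X_genes, y, cv=5).mean()
acc_set = cross_val_score(clf, X_sets, y, cv=5).mean()
print(f"gene-level accuracy: {acc_gene:.2f}, set-level accuracy: {acc_set:.2f}")
```

On this random toy data both accuracies hover around chance; the point is only to show the protocol, i.e. that the set-level representation feeds a much lower-dimensional matrix to the same learner.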

