Abstract

Studies conducted over time series can be far more informative than those that capture only a single moment in time. For transcriptomic data, however, time points are sparse, creating a constant need for methods capable of extracting information from experiments of this kind. We propose a feature selection algorithm embedded in a hidden Markov model, applied to gene expression time-course data on either a single or multiple biological conditions. For the latter, in a simple case-control study, features (genes) are selected under the assumption of no change over time in the control samples, whereas the case group must show at least one change. The proposed model reduces the feature space according to a two-state hidden Markov model whose states represent change and no-change in gene expression. Features are ranked by three scores: the number of changes across time, the magnitude of those changes, and the quality of the replicates as a measure of how much they deviate from the mean. Notably, this strategy overcomes the small-sample limitation common in transcriptome experiments through a process of data transformation and rearrangement. To validate the method, we applied it to three publicly available data sets. Results show that the feature domain is reduced by up to 90%, leaving only a few but relevant features, with findings consistent with those previously reported. Moreover, the strategy proved robust and stable on studies where sample size would otherwise be an issue: even with two biological replicates and/or three time points, the method performs well.
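To illustrate the two-state idea, the sketch below decodes a change/no-change state path for a single gene with the Viterbi algorithm. All parameters (the start, transition, and emission probabilities, and the toy observation series) are illustrative assumptions, not the fitted values from this work, and the discretization of expression changes is a simplification for demonstration only.

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely state path for a discrete-observation HMM (log space)."""
    n_states = len(start_p)
    T = len(obs)
    logv = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    logv[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for s in range(n_states):
            scores = logv[t - 1] + np.log(trans_p[:, s])
            back[t, s] = np.argmax(scores)
            logv[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
    path = np.zeros(T, dtype=int)
    path[-1] = np.argmax(logv[-1])
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# State 0 = "no change", state 1 = "change". Observations: 1 if the
# expression change between consecutive time points exceeds some
# threshold, else 0 (a simple discretization assumption).
start_p = np.array([0.9, 0.1])                 # genes usually start unchanged
trans_p = np.array([[0.8, 0.2], [0.3, 0.7]])   # states tend to persist
emit_p = np.array([[0.9, 0.1], [0.2, 0.8]])    # P(observed change | state)

obs = np.array([0, 0, 1, 1, 0])                # toy gene across 5 time points
states = viterbi(obs, start_p, trans_p, emit_p)
n_changes = int(np.sum(states == 1))           # e.g. the "number of changes" score
print(states.tolist(), n_changes)              # → [0, 0, 1, 1, 0] 2
```

A gene whose decoded path contains at least one "change" state in the case group, and none in the controls, would be a selection candidate under the case-control assumption described above.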

Highlights

  • High dimensional transcriptome data is described by many features, many of which are either redundant or irrelevant

  • In this work we address the classification of time series gene expression data using two embedded processes: feature selection and hidden Markov models

  • We present a strategy that selects the most relevant features from high dimensional gene expression experiments with a longitudinal design, for one or multiple conditions

Introduction

High dimensional transcriptome data is described by many features, many of which are either redundant or irrelevant. Identifying these features is key to ensuring that results are trustworthy. Machine learning algorithms or statistical models can be applied to classify features, but at the cost of other important problems such as model over-fitting, increased computational resource use, and higher analysis cost [1], [2]. Feature selection (FS) is the process of eliminating irrelevant features in the data by extracting only a subset of informative ones. It should not be confused with feature extraction (FE), such as principal component analysis or compression, which projects the original set onto a new feature space of lower dimensionality. FS, on the other hand, identifies relevant features without altering the original domain.
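The FS/FE distinction can be made concrete with a toy example. Below, a variance threshold stands in for feature selection (the kept columns are still the original measurements) and a PCA-style projection stands in for feature extraction (the new columns are linear combinations of the originals). The data, threshold, and component count are all hypothetical, chosen only to show the contrast.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))          # 20 samples, 5 features (genes)
X[:, 2] = 0.01 * rng.normal(size=20)  # a near-constant, uninformative feature

# Feature selection: keep features whose variance exceeds a threshold.
# The selected columns remain in the original feature domain.
keep = X.var(axis=0) > 0.1
X_fs = X[:, keep]

# Feature extraction: project onto the top-2 principal directions.
# The resulting columns are new features, not original measurements.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_fe = Xc @ Vt[:2].T

print(X_fs.shape, X_fe.shape)  # → (20, 4) (20, 2)
```

Both routes reduce dimensionality, but only the selected subset can be read back as specific genes, which is why FS is preferred when interpretability of the retained features matters.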
