Abstract

Background: Survival analysis makes it possible to evaluate whether a genetic exposure in a population is related to the time until an event occurs. Owing to the complexity of common human diseases, there is a growing need for bioinformatics tools that properly model non-linear, high-order interactions in lifetime datasets. Such tools, including the survival dimensionality reduction algorithm, may incur extreme computational costs on large-scale datasets. Here we address the problem of estimating the quality of attributes, so as to extract relevant features from lifetime datasets and scale down their size.

Methods: The ReliefF algorithm was modified and adjusted to compensate for the loss of information due to censoring, introducing reclassification and weighting schemes. Synthetic two-locus epistatic lifetime datasets with 500 attributes, 400–800 individuals, and varying degrees of cumulative heritability and censorship were generated. The ability of the survival ReliefF algorithm (sReliefF), and of a tuned sReliefF approach, to select the causative pair of attributes was evaluated and compared with univariate selection based on Cox scores.

Results/Conclusions: The sReliefF methods efficiently scaled down the simulated datasets, whereas univariate selection performed no better than random choice. These approaches may help to reduce the computational cost, and to improve the classification task, of algorithms that model high-order interactions in the presence of right-censored data.

Availability: http://sourceforge.net/projects/sdrproject/files/sReliefF/
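Since the abstract only outlines the reclassification and weighting ideas, the sketch below illustrates one possible ReliefF-style attribute weighting for right-censored data. It is not the published sReliefF procedure: the neighbour search, the handling of censored pairs, and the parameters `k`, `time_threshold`, and the pair-weight constants are assumptions made purely for illustration.

```python
# Illustrative sketch only: a ReliefF-style attribute weighting adapted to
# right-censored survival data. Not the authors' sReliefF implementation.
import numpy as np

def survival_relief_weights(X, time, event, k=10, time_threshold=1.0):
    """Estimate attribute quality from lifetime data.

    X:     (n_samples, n_attributes) genotype matrix, e.g. coded 0/1/2
    time:  observed follow-up times
    event: 1 if the event occurred, 0 if the observation is right-censored
    """
    n, p = X.shape
    # Attribute-wise ranges for normalised differences, as in classic ReliefF.
    rng = (X.max(axis=0) - X.min(axis=0)).astype(float)
    rng[rng == 0] = 1.0
    weights = np.zeros(p)

    for i in range(n):
        # Manhattan distance from instance i to all others in attribute space.
        dist = np.abs(X - X[i]).sum(axis=1).astype(float)
        dist[i] = np.inf
        neighbours = np.argsort(dist)[:k]

        for j in neighbours:
            # Decide whether i and j behave like a "hit" (similar outcome) or
            # a "miss" (different outcome). With censoring, only pairs whose
            # time ordering is unambiguous contribute fully; ambiguous pairs
            # are down-weighted (constants below are illustrative).
            dt = abs(time[i] - time[j])
            if event[i] and event[j]:
                pair_weight, is_hit = 1.0, dt <= time_threshold
            elif event[i] or event[j]:
                # One observation censored: the pair is comparable only if the
                # censored time exceeds the other subject's event time.
                if event[i]:
                    comparable = time[j] >= time[i]
                else:
                    comparable = time[i] >= time[j]
                pair_weight = 1.0 if comparable else 0.5
                is_hit = dt <= time_threshold if comparable else True
            else:
                # Both censored: least informative pair.
                pair_weight, is_hit = 0.25, True

            diff = np.abs(X[i] - X[j]) / rng
            # Hits with differing attribute values penalise the attribute;
            # misses with differing values reward it.
            weights += (-diff if is_hit else diff) * pair_weight

    return weights / (n * k)
```

In practice a caller would rank attributes by these weights and retain only the top-scoring subset before running an exhaustive interaction search; the pair-weight constants used here are placeholders, not values taken from the paper.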

