Abstract

Objective. Tree-based models are common, assumption-free methods of data analysis. Their application to survival data has largely been limited to univariate models, which partition the feature space with axis-parallel hyperplanes, so that each internal node involves a single feature. In this paper, I extend oblique survival tree induction to competing risks by modifying a piecewise-linear criterion function. Additionally, the use of tree-based ensembles to analyze competing events is proposed.

Method and materials. Two types of competing risks trees are proposed: a single event tree, designed for analysis of the event of interest, and a composite event tree, in which all competing events are taken into account. The induction process is similar for both; they differ only in how the criterion function minimized to generate individual tree nodes is calculated. These two tree types were also used to build ensembles whose outcome is an aggregated cumulative incidence function. Nine real data sets and one simulated data set were used to assess the performance of the models, and a detailed analysis was conducted on the follicular cell lymphoma data.

Results. The evaluation focused on two measures: the prediction error, expressed by the integrated Brier score (IBS), and the ranked measure of predictive ability, calculated as a time-truncated concordance index (C–index). The proposed techniques were compared with existing approaches: the Fine–Gray subdistribution hazard model, the Fine–Gray regression model with backward elimination, and the random survival forest for competing risks. The results for both the IBS and the C–index indicated statistically significant differences between these methods (p < .0001).

Conclusions. The prediction error of the individual trees was similar to that of the other methods, but their C–index results differed from those of the Fine–Gray subdistribution hazard model and the Fine–Gray regression with backward elimination. The predictive ability of the ensembles was comparable to that of existing algorithms, while their IBS values were better than those of either the random survival forest or the Fine–Gray regression with backward elimination.
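To make the ensemble outcome concrete, the sketch below shows one way a cumulative incidence function (CIF) can be estimated nonparametrically with the Aalen–Johansen estimator and how CIF curves from individual trees could be averaged into an ensemble prediction. This is an illustrative sketch, not the paper's implementation: the function names, the event-coding convention (0 = censored, positive integers = competing causes), and the step-function averaging on a common time grid are all assumptions made here for demonstration.

```python
import numpy as np


def aalen_johansen_cif(time, event, cause=1):
    """Nonparametric (Aalen-Johansen) cumulative incidence function for one cause.

    time  : follow-up times
    event : event codes (0 = censored, 1, 2, ... = competing causes)  # assumed coding
    cause : the cause of interest
    Returns (event_times, cif) with the CIF evaluated at the distinct event times.
    """
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)

    event_times = np.unique(time[event > 0])   # distinct times at which any event occurs
    surv = 1.0                                 # all-cause Kaplan-Meier survival just before t
    cif = np.zeros(len(event_times))
    running = 0.0
    for j, t in enumerate(event_times):
        at_risk = np.sum(time >= t)
        d_any = np.sum((time == t) & (event > 0))
        d_cause = np.sum((time == t) & (event == cause))
        running += surv * d_cause / at_risk    # CIF increment for the cause of interest
        cif[j] = running
        surv *= 1.0 - d_any / at_risk          # update all-cause survival after t
    return event_times, cif


def ensemble_cif(per_tree_curves, grid):
    """Average step-function CIF curves from individual trees on a common time grid."""
    averaged = np.zeros((len(per_tree_curves), len(grid)))
    for i, (times, cif) in enumerate(per_tree_curves):
        idx = np.searchsorted(times, grid, side="right") - 1
        averaged[i] = np.where(idx >= 0, cif[np.clip(idx, 0, None)], 0.0)
    return averaged.mean(axis=0)


# Illustrative usage with toy data (cause 1 is the event of interest).
time = [2.0, 3.5, 3.5, 5.0, 6.1, 7.2, 8.0]
event = [1, 0, 2, 1, 0, 2, 1]
times, cif = aalen_johansen_cif(time, event, cause=1)
grid = np.linspace(0, 8, 5)
print(ensemble_cif([(times, cif)], grid))
```

In an ensemble of bootstrap-grown trees, each tree would contribute one (times, cif) pair for a new observation, and averaging those step functions on a shared grid yields the aggregated CIF used as the prediction, which can then be scored with the IBS or a time-truncated C–index.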
