Abstract

Inferential research commonly involves identification of causal factors from within high dimensional data, but selection of the ‘correct’ variables can be problematic. One specific problem is that results vary depending on the statistical method employed, and it has been argued that triangulation of multiple methods is advantageous to safely identify the correct, important variables. To date, no formal method of triangulation has been reported that incorporates both model stability and coefficient estimates; in this paper we develop an adaptable, straightforward method to achieve this. Six methods of variable selection were evaluated using simulated datasets of different dimensions with known underlying relationships. We used a bootstrap methodology to combine stability matrices across methods and estimate aggregated coefficient distributions. Novel graphical approaches provided a transparent route to visualise and compare results between methods. The proposed aggregated method provides a flexible route to formally triangulate results across any chosen number of variable selection methods and provides a combined result that incorporates uncertainty arising from between-method variability. In these simulated datasets, the combined method generally performed as well as or better than the individual methods, with low error rates and clearer demarcation of the true causal variables than for the individual methods.

Highlights

  • Inferential research commonly involves identification of causal factors from within high dimensional data but selection of the ‘correct’ variables can be problematic

  • Methods have been proposed in the statistical literature to improve variable selection for inference in high dimensional data, including modifications to Akaike information criterion (AIC)/BIC[5], and a variety of regularisation methods based on functions that penalise model coefficients to balance over- and under-fitting[6,7,8]

  • Triangulation of multiple methods has been proposed as an aid to identify important variables[13]; in this context triangulation refers to conducting a variety of analytic methods on one set of data, on the premise that the most important variables will tend to be identified by most methods

Introduction

Inferential research commonly involves identification of causal factors from within high dimensional data, but selection of the ‘correct’ variables can be problematic. Methods have been proposed in the statistical literature to improve variable selection for inference in high dimensional data, including modifications to AIC/BIC[5], and a variety of regularisation methods based on functions that penalise model coefficients to balance over- and under-fitting (the variance-bias trade-off)[6,7,8]. It has been shown that different methods of variable selection can result in considerable differences in covariates selected[9], and this poses difficult questions for the researcher about which method to choose, as well as presenting wider concerns around variability of results and the reproducibility of science[10,11]. Resampling, such as bootstrapping, is effective to evaluate selection stability[14] and has the advantage of simultaneously providing an estimate of model coefficient distributions[4], both of which can be used to provide a ranking of the relative importance of potential covariates[16].
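To illustrate the bootstrap idea described above, the following is a minimal sketch (not the authors' exact procedure) of how resampling yields both a selection-stability measure and a coefficient distribution for one variable selection method. It assumes a lasso as the selection method, simulated data with three true causal variables, and arbitrary choices of penalty strength and number of resamples; in practice these would be tuned, and the same loop would be repeated for each method being triangulated.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Simulated data: 3 true causal variables out of 10 candidates
n, p = 200, 10
beta_true = np.zeros(p)
beta_true[:3] = [1.5, -1.0, 0.8]
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(scale=1.0, size=n)

# Bootstrap: refit the selection method on each resample
B = 200
coefs = np.zeros((B, p))
for b in range(B):
    idx = rng.integers(0, n, size=n)            # resample rows with replacement
    model = Lasso(alpha=0.1).fit(X[idx], y[idx])
    coefs[b] = model.coef_

# Stability: proportion of resamples in which each variable was selected
selection_freq = (np.abs(coefs) > 1e-8).mean(axis=0)

# Coefficient distribution: bootstrap mean and percentile interval per variable
coef_mean = coefs.mean(axis=0)
coef_ci = np.percentile(coefs, [2.5, 97.5], axis=0)
```

Ranking variables by `selection_freq` (breaking ties by `|coef_mean|`) gives the kind of relative-importance ordering referred to in the text; stacking the `coefs` arrays from several methods is one simple way to form an aggregated coefficient distribution across methods.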
