Abstract

From the perspective of research methodologists, it is not especially surprising that physicians perform poorly in prognostication.1,2 A given physician typically sees a limited number of specific index cases, outcome ascertainment in routine care is usually haphazard, and the clinician is handicapped by a formidable array of human cognitive biases. It is no wonder that prognostication is an activity physicians often avoid.

With the advent of easy-to-use statistical software and the increasing availability of large databases, thousands of predictive models have been published in the medical literature, each promising to address some important clinical problem. One might expect to find these enthusiastically adopted into routine clinical care. Instead, clinical predictive models follow a familiar (by now predictable) yet depressing pattern of translation: many are made, few are validated, and almost none are used. For those of us convinced on principle that prognosis is a critical part of clinical care and that accurate prognostication should guide clinical decision making and therapeutic choice,3 this remains a remarkably stubborn and curious observation.

However, important counterexamples have begun to emerge.
Contemporary cardiovascular and cerebrovascular clinical practice guidelines, for instance, advocate and incorporate the use of risk prediction tools to guide clinical decisions,4-7 and astute practicing physicians generally calculate risk scores before starting warfarin treatment for nonvalvular atrial fibrillation or statin therapy for primary prevention of coronary artery disease.
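The point-score arithmetic behind such bedside tools is deliberately simple. As an illustration, a minimal sketch of the kind of calculation a clinician runs before an anticoagulation decision; the weights follow the published CHA2DS2-VASc definition for stroke risk in nonvalvular atrial fibrillation, but the function itself is our own hypothetical implementation, not an official calculator:

```python
def cha2ds2_vasc(chf: bool, hypertension: bool, age: int, diabetes: bool,
                 prior_stroke_tia: bool, vascular_disease: bool,
                 female: bool) -> int:
    """Return the CHA2DS2-VASc stroke-risk score (0-9); illustrative sketch."""
    score = 0
    score += 1 if chf else 0                 # Congestive heart failure
    score += 1 if hypertension else 0        # Hypertension
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)  # Age bands
    score += 1 if diabetes else 0            # Diabetes mellitus
    score += 2 if prior_stroke_tia else 0    # Prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0    # Vascular disease
    score += 1 if female else 0              # Sex category (female)
    return score

# A 72-year-old woman with hypertension and diabetes:
print(cha2ds2_vasc(chf=False, hypertension=True, age=72, diabetes=True,
                   prior_stroke_tia=False, vascular_disease=False, female=True))
# -> 4 (hypertension + age 65-74 + diabetes + female sex)
```

The same structure, a handful of weighted binary or banded predictors summed to an integer, underlies most point-score risk tools in routine use.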
Nevertheless, these exemplary successes serve to highlight the otherwise embarrassing landscape for prediction.

One cause for optimism for the field of prognosis is a growing coalition of research methodologists who have come together to focus on the problems of prediction. The Prognosis Research Strategy (PROGRESS) group has outlined the methods used in prognostic research in general and prognostic models in particular, and has identified a number of ways to improve the impact of this work.8 And now, in an effort to harmonize the reporting of risk prediction models or instruments, the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) investigators have developed a set of recommendations for the optimal reporting of studies examining the development, validation, and updating of a prediction model.9

The TRIPOD investigators followed published guidance for the development of reporting guidelines and created a steering committee of experts in prediction modeling, who carried out a systematic search following the examples of the CONSORT and STROBE reporting guidelines, as well as others promoted by the EQUATOR Network. A checklist of 76 candidate items was drafted and then narrowed through web-based surveys, with consensus reached over a 3-day meeting attended by 25 expert participants.
A final checklist of 22 items was developed and, as shown in Table 1, recommends specific items that should be reported in manuscripts describing the development or validation of prediction models.9 A detailed explanation and elaboration of the statement appears in the Annals of Internal Medicine.11

Table 1. TRIPOD Checklist: Items to Report in Manuscripts Describing Development or Validation of Prediction Models (adapted from Collins et al10)

Title and abstract
1. Title (D; V): Identify the study as developing and/or validating a multivariable prediction model, the target population, and the outcome to be predicted.
2. Abstract (D; V): Provide a summary of objectives, study design, setting, participants, sample size, predictors, outcome, statistical analysis, results, and conclusions.

Introduction
3a. Background and objectives (D; V): Explain the medical context (including whether diagnostic or prognostic) and rationale for developing or validating the multivariable prediction model, including references to existing models.
3b. (D; V): Specify the objectives, including whether the study describes the development or validation of the model, or both.

Methods
4a. Source of data (D; V): Describe the study design or source of data (eg, randomized trial, cohort, or registry data), separately for the development and validation datasets, if applicable.
4b. (D; V): Specify the key study dates, including start of accrual; end of accrual; and, if applicable, end of follow-up.
5a. Participants (D; V): Specify key elements of the study setting (eg, primary care, secondary care, general population), including number and location of centers.
5b. (D; V): Describe eligibility criteria for participants.
5c. (D; V): Give details of treatments received, if relevant.
6a. Outcome (D; V): Clearly define the outcome that is predicted by the prediction model, including how and when assessed.
6b. (D; V): Report any actions to blind assessment of the outcome to be predicted.
7a. Predictors (D; V): Clearly define all predictors used in developing the multivariable prediction model, including how and when they were measured.
7b. (D; V): Report any actions to blind assessment of predictors for the outcome and other predictors.
8. Sample size (D; V): Explain how the study size was arrived at.
9. Missing data (D; V): Describe how missing data were handled (eg, complete-case analysis, single imputation, multiple imputation) with details of any imputation method.
10a. Statistical analysis methods (D): Describe how predictors were handled in the analyses.
10b. (D): Specify type of model, all model-building procedures (including any predictor selection), and method for internal validation.
10c. (V): For validation, describe how the predictions were calculated.
10d. (D; V): Specify all measures used to assess model performance and, if relevant, to compare multiple models.
10e. (V): Describe any model updating (eg, recalibration) arising from the validation, if done.
11. Risk groups (D; V): Provide details on how risk groups were created, if done.
12. Development vs validation (V): For validation, identify any differences from the development data in setting, eligibility criteria, outcome, and predictors.

Results
13a. Participants (D; V): Describe the flow of participants through the study, including the number of participants with and without the outcome and, if applicable, a summary of the follow-up time. A diagram may be helpful.
13b. (D; V): Describe the characteristics of the participants (basic demographics, clinical features, available predictors), including the number of participants with missing data for predictors and outcome.
13c. (V): For validation, show a comparison with the development data of the distribution of important variables (demographics, predictors, and outcome).
14a. Model development (D): Specify the number of participants and outcome events in each analysis.
14b. (D): If done, report the unadjusted association between each candidate predictor and outcome.
15a. Model specification (D): Present the full prediction model to allow predictions for individuals (ie, all regression coefficients, and model intercept or baseline survival at a given time point).
15b. (D): Explain how to use the prediction model.
16. Model performance (D; V): Report performance measures (with confidence intervals) for the prediction model.
17. Model updating (V): If done, report the results from any model updating (ie, model specification, model performance).

Discussion
18. Limitations (D; V): Discuss any limitations of the study (such as nonrepresentative sample, few events per predictor, missing data).
19a. Interpretation (V): For validation, discuss the results with reference to performance in the development data, and any other validation data.
19b. (D; V): Give an overall interpretation of the results, considering objectives, limitations, results from similar studies, and other relevant evidence.
20. Implications (D; V): Discuss the potential clinical use of the model and implications for future research.

Other information
21. Supplementary information (D; V): Provide information about the availability of supplementary resources, such as study protocol, web calculator, and datasets.
22. Funding (D; V): Give the source of funding and the role of the funders for the present study.

Note: Items relevant only to the development of a prediction model are denoted by D, items relating solely to validation of a prediction model are denoted by V, and items relating to both are denoted D; V.

Beginning at the title and abstract stage, the TRIPOD statement recommends explicit mention of development or validation and a clear description of the predictors in the model and the outcome. As with the STROBE statement for the reporting of observational studies, the TRIPOD statement recommends adequate description of the data source, eligibility criteria, sample size, and ascertainment of variables (predictors). In addition, details specific to prediction models are recommended, such as methods of assessing model performance, definitions of risk groups, and a description of any updating or recalibration of the model.

Transparent reporting of these methods helps the clinician reader interpret how the model was built and helps the reviewer and journal editor assess the risk of bias in model development.
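Checklist item 15a is the linchpin of usability: a paper that reports every regression coefficient plus the intercept (or baseline survival at a given time point) lets any reader reproduce individual predictions. A minimal sketch for a logistic model, using entirely hypothetical coefficients and predictor names (not taken from any published model):

```python
import math

# Hypothetical coefficients for illustration only -- not a published model.
INTERCEPT = -4.2
COEFS = {"age_per_10y": 0.35, "male": 0.20, "log_acr": 0.45, "egfr_per_5": -0.40}

def predicted_risk(predictors: dict) -> float:
    """Logistic model: risk = 1 / (1 + exp(-(intercept + sum of b_i * x_i)))."""
    lp = INTERCEPT + sum(COEFS[name] * value for name, value in predictors.items())
    return 1.0 / (1.0 + math.exp(-lp))

# A hypothetical 65-year-old man with elevated albuminuria and reduced eGFR:
risk = predicted_risk({"age_per_10y": 6.5, "male": 1, "log_acr": 5.0, "egfr_per_5": 4.0})
print(f"predicted risk: {risk:.2f}")  # about 0.25 with these made-up inputs
```

A time-to-event model is reported analogously, with risk(t) = 1 - S0(t)^exp(linear predictor); either way, omitting the intercept or the baseline survival leaves readers unable to use the model at all.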
As an example, models developed from highly selected populations, such as a clinical trial of overt diabetic nephropathy, may not generalize to the broader clinical population of patients with chronic kidney disease.12 Additional sources of bias in model development can arise from inappropriate selection of candidate predictors (eg, without considering face validity) and inaccurate ascertainment of the primary outcome (eg, claims-based definitions vs independently adjudicated outcomes). Without transparent reporting of these methods, biased models that are unlikely to validate externally can be published and their findings applied incorrectly to clinical practice.

With respect to the results section of a manuscript, the TRIPOD statement goes beyond the STROBE statement and recommends that investigators report not only the performance characteristics of the model but also the full prediction model, to allow predictions for individuals, together with an explanation of how it should be used. Standard measures of discrimination, such as the C statistic or the area under the receiver operating characteristic curve (AUROC), should be reported consistently, as should measures of calibration, including observed versus predicted probability graphs. Additional measures of discrimination, calibration, and reclassification can also be helpful in particular circumstances (such as model-to-model comparisons), but need to be interpreted carefully.13 Finally, formal evaluation of model performance in a decisional context using decision curve analysis can help determine potential clinical utility,14 but ultimately, trials assessing the clinical impact of model use are necessary and should be conducted.

As part of the discussion of the prediction model, TRIPOD recommends an in-depth description of the limitations of the proposed model, as well as its potential clinical uses. We believe that consideration of clinical utility is the critical first step in model development. A good understanding of the important clinical decisions a prediction is likely to inform helps to operationally define the target population (the patients for whom the model is intended and in whom it should be developed) and the decisionally relevant outcome. For example, for patients with CKD stages 4 to 5, it is important to estimate when patients might be at risk of requiring dialysis, for the purposes of planning dialysis access. Such a description would represent adequate reporting of the intended clinical utility. The adequacy of model performance in terms of discrimination and calibration needs to be judged in this context. Reporting conventional measures of discrimination and calibration allows this to be done informally (eg, a C statistic > 0.8 for prediction of kidney failure might be interpreted as adequate discrimination) or more formally through decision curve analysis.15,16 Although defining risk categories or thresholds (eg, >20% risk of kidney failure at 1 year as a trigger for dialysis access planning) can enhance knowledge translation and actionability, these thresholds should be understood for what they are: heuristics (associated with considerable uncertainty) designed to support informed decision making and to aid the development of consensus for clinical guidelines.17

TRIPOD also lists the reporting of funding sources and the availability of additional aids, such as smartphone calculators, under the supplementary information section of the manuscript. Although these decision aids often appear only as optional online appendices, we endorse the inclusion of a usable risk calculator (in the form of a point score, nomogram, or computer-aided calculator) with the publication of every prediction model. Risk calculators are easy to build, encourage knowledge translation, and can greatly enhance the usability of the prediction model.15 The widespread use of mobile apps such as MedCalc and QxCalculate is evidence that portals providing access to multiple prediction models can further enhance usability.

In summary, the TRIPOD statement separates the reporting of prediction model manuscripts from that of traditional risk factor-based cohort studies and will greatly enhance the clarity of reporting for these studies. These recommendations should be adopted by investigators, reviewers, and editorial boards.
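Each of the performance measures discussed above (discrimination, calibration, and decision-curve net benefit) can be computed from nothing more than a vector of predicted risks and a vector of observed outcomes. A self-contained sketch on synthetic data, with the 20% threshold chosen purely for illustration:

```python
import random

random.seed(0)

# Synthetic cohort: each patient's outcome is drawn at their predicted risk,
# so the "model" is well calibrated by construction.
n = 2000
pred = [random.betavariate(2, 8) for _ in range(n)]     # predicted risks
obs = [1 if random.random() < p else 0 for p in pred]   # observed outcomes

# Discrimination: the C statistic is the probability that a randomly chosen
# event has a higher predicted risk than a randomly chosen non-event (AUROC).
events = [p for p, y in zip(pred, obs) if y == 1]
nonevents = [p for p, y in zip(pred, obs) if y == 0]
concordant = sum(1 for pe in events for pn in nonevents if pe > pn)
ties = sum(1 for pe in events for pn in nonevents if pe == pn)
c_statistic = (concordant + 0.5 * ties) / (len(events) * len(nonevents))

# Calibration: compare mean predicted risk with the observed event rate
# within quintiles of predicted risk (a tabular observed-vs-predicted plot).
order = sorted(range(n), key=lambda i: pred[i])
for q in range(5):
    idx = order[q * n // 5:(q + 1) * n // 5]
    mean_pred = sum(pred[i] for i in idx) / len(idx)
    obs_rate = sum(obs[i] for i in idx) / len(idx)
    print(f"quintile {q + 1}: predicted {mean_pred:.3f}, observed {obs_rate:.3f}")

# Decision curve analysis: net benefit of treating everyone above a threshold,
# weighting false positives by the odds of the threshold probability.
t = 0.20
tp = sum(1 for p, y in zip(pred, obs) if p >= t and y == 1)
fp = sum(1 for p, y in zip(pred, obs) if p >= t and y == 0)
net_benefit = tp / n - (fp / n) * (t / (1 - t))
print(f"C statistic: {c_statistic:.2f}; net benefit at 20% threshold: {net_benefit:.3f}")
```

In a well-calibrated model the predicted and observed columns track each other; in a miscalibrated one they diverge even when the C statistic looks respectable, which is why TRIPOD item 16 asks for both kinds of measure.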
Of note, with the publication of this editorial, AJKD's Information for Authors & Editorial Policies has been updated to include prediction models as a subtype of Original Investigation, to require that authors follow the TRIPOD statement, and to list the appropriate structured abstract headings to be used in AJKD articles of this type. To be sure, this fastidious attention to reporting standards should not be confused with solving the major methodological issues that remain in clinical prediction. It is also true that not all the recommendations are of equal importance and some may quibble about particular omissions, but it is difficult to doubt the rigor of the process used to develop the TRIPOD guidance, and the development of consensus itself marks a crucial milestone for the field.

No doubt, broader integration of prediction models into clinical practice will require many more changes, including greater attention to the decisional context that determines the opportunity for clinical impact; routine independent validation of models on multiple data sources; methods for automatic updating and, particularly, recalibration in different settings; ready and intuitive access to usable models (eg, through a single online portal or smartphone app); a greater expectation that impact be tested empirically; and a shift away from the current mental paradigm of a one-size-fits-all approach to evidence-based medicine. Nevertheless, we strongly endorse the TRIPOD statement as a necessary and important step in the modernization of a field sorely in need of standards.
Support: Dr Tangri is supported by the MHRC Establishment Grant and the KRESCENT New Investigator Award, a joint initiative of the Canadian Institute of Health Research, the Canadian Society of Nephrology, and the Kidney Foundation of Canada. Dr Kent is supported by a Patient-Centered Outcomes Research Institute (PCORI) Methodology grant (1IP2PI000722) and a National Institute of Neurological Disorders and Stroke grant (U01 NS086294).

Financial Disclosure: The authors declare that they have no relevant financial interests.
