Abstract

Introduction: PD subtype classification systems attempt to address the heterogeneity of Parkinson's disease (PD), a widely recognized feature of the disease with implications for prognosis and therapeutic development. There is no consensus on a valid PD subtype classification system, and their use in clinical research is sparse. Reproducibility has not been systematically assessed as a step toward validating a PD subtype classification system. We aimed to assess the reproducibility of previously published data-driven PD subtype classification systems in a well-characterized cohort created for clinical research purposes, the Longitudinal and Biomarker Study in Parkinson's Disease (LABS-PD).

Methods: We identified all published studies of data-driven PD subtype classification systems and included those with variables that conceptually matched the variables available in LABS-PD. We reproduced the cluster analyses of the included studies in LABS-PD. Reproducibility was determined by a panel of experts using a modified Delphi consensus process.

Results: We included eight studies of data-driven PD subtype classification systems and replicated in LABS-PD the analyses conducted in each original study. After two iterations of the modified Delphi consensus process, no study was judged reproducible in LABS-PD.

Conclusions: Currently published data-driven PD subtype classification systems lack reproducibility in a well-characterized cohort of patients initially recruited for a clinical trial of a disease-modifying intervention. These results raise concerns about the utility of the widely discussed concept of data-driven PD subtypes. This gap is a barrier to the meaningful use of PD subtypes and calls for the establishment of standards for the validation and use of these classification systems.
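As a point of reference for what replicating a data-driven subtype analysis involves, the following is a minimal sketch in Python using synthetic data and hypothetical variable names; the variables, cluster count, and algorithm are placeholders for illustration only and do not correspond to the methods of any of the eight included studies.

```python
# Minimal sketch of a cluster-analysis replication step, assuming a
# hypothetical LABS-PD-like table with illustrative column names.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
cohort = pd.DataFrame({                       # stand-in for cohort data
    "age_at_onset": rng.normal(60, 10, 200),
    "motor_score": rng.normal(30, 8, 200),    # e.g., a UPDRS-like scale
    "cognitive_score": rng.normal(27, 2, 200),
    "depression_score": rng.normal(8, 4, 200),
})

# Standardize the conceptually matched variables, then re-run the
# clustering with the same number of clusters as the original study.
X = StandardScaler().fit_transform(cohort)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Summarize each candidate subtype so a panel can judge whether the
# resulting profiles resemble those reported in the original paper.
print(cohort.assign(subtype=labels).groupby("subtype").mean().round(1))
```

In the study itself, the resulting cluster profiles were compared with the originally published subtypes by an expert panel using a modified Delphi consensus process rather than by any single quantitative criterion.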

