Abstract

Background/Aims: Calculation of correlation coefficients is often undertaken as a way of comparing different cognitive screening instruments (CSIs). However, test scores may correlate but not agree, and high correlation may mask lack of agreement between scores. The aim of this study was to use the methodology of Bland and Altman to calculate limits of agreement between the scores of selected CSIs and to contrast the findings with Pearson's product moment correlation coefficients between the test scores of the same instruments. Methods: Datasets from three pragmatic diagnostic accuracy studies which examined the Mini-Mental State Examination (MMSE) vs. the Montreal Cognitive Assessment (MoCA), the MMSE vs. the Mini-Addenbrooke's Cognitive Examination (M-ACE), and the M-ACE vs. the MoCA were analysed to calculate correlation coefficients and limits of agreement between test scores. Results: Although test scores were highly correlated (all >0.8), calculated limits of agreement were broad (all >10 points), and in one case (MMSE vs. M-ACE) exceeded 15 points. Conclusion: Correlation is not agreement. Highly correlated test scores may conceal broad limits of agreement, consistent with the different emphases of different tests with respect to the cognitive domains examined. Routine incorporation of limits of agreement into diagnostic accuracy studies which compare different tests merits consideration, to enable clinicians to judge whether or not their agreement is close.
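The contrast drawn in the abstract rests on the standard Bland-Altman calculation: the bias is the mean of the paired score differences, and the 95% limits of agreement are the bias ± 1.96 × the standard deviation of those differences. The sketch below is not the study's analysis; it uses synthetic data (the score ranges and noise levels are illustrative assumptions, not the MMSE/MoCA/M-ACE datasets) to show how two test scores can correlate above 0.8 while their limits of agreement span more than 10 points.

```python
import numpy as np

def bland_altman_limits(scores_a, scores_b):
    """Return bias (mean difference) and 95% limits of agreement (Bland & Altman)."""
    diff = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Synthetic, illustrative data only: two instruments tracking the same
# underlying ability but with a systematic offset and different noise.
rng = np.random.default_rng(0)
ability = rng.uniform(10, 30, size=100)
test_a = ability + rng.normal(0, 1.5, size=100)        # hypothetical test A scores
test_b = ability - 4 + rng.normal(0, 2.5, size=100)    # hypothetical test B, scored lower

r = np.corrcoef(test_a, test_b)[0, 1]
bias, lower, upper = bland_altman_limits(test_a, test_b)
print(f"Pearson r = {r:.2f}")
print(f"Bias = {bias:.1f}; 95% limits of agreement: {lower:.1f} to {upper:.1f} "
      f"(width {upper - lower:.1f} points)")
```

With these assumed parameters the correlation comes out high (around 0.9) while the limits of agreement are roughly 11 points wide, mirroring the pattern the abstract reports: correlation alone does not demonstrate that two instruments' scores agree.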
