Abstract

Background/Aims: We aimed to compare the Mini-Mental State Examination (MMSE) with the Mini-Cog, measuring agreement in participants’ classification, using a general population sample. Methods: A cross-sectional evaluation of 609 community dwellers aged ≥60 years was performed by trained interviewers. Cohen’s kappa and 95% confidence intervals (CI) were calculated to assess overall agreement, and Cronbach’s alphas were computed to assess reliability. Two-parameter Item Response Theory models (difficulty and discrimination parameters) were used to assess discrimination. Results: At the MMSE cut-point of <24, 3.1% of the participants would be classified as ‘cognitively impaired’, and 6.2% at the cut-point of <25. Using the Mini-Cog cut-point of <3, 11.3% would be classified as impaired. For the MMSE cut-point <24 and Mini-Cog <3, we observed a Cohen’s kappa of 0.116 (95% CI: –0.073 to 0.305), and of 0.258 (95% CI: 0.101–0.415) for the cut-point <25. The highest kappa was obtained for the MMSE cut-point <26 and Mini-Cog <3 (kappa = 0.413). The MMSE Cronbach’s alpha was 0.6108 and the Mini-Cog’s alpha was 0.2776. Co-calibration according to inherent ability is presented graphically. Conclusions: Agreement between the scales appears fragile in our sample. The discrimination and reliability analyses suggest better performance for subsets of the MMSE than for the Mini-Cog. The usefulness of calibrated scores is discussed.
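As a rough illustration of the agreement and reliability statistics reported above, the sketch below computes Cohen’s kappa with a bootstrap 95% CI and Cronbach’s alpha on hypothetical dichotomised scores. The simulated data, variable names, and item counts are assumptions for illustration only; they are not the study’s data or analysis code.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary classifications (1 = 'impaired') for 609 participants,
# e.g. MMSE < 24 versus Mini-Cog < 3; real study data would replace these.
rng = np.random.default_rng(0)
mmse_impaired = rng.integers(0, 2, size=609)
minicog_impaired = rng.integers(0, 2, size=609)

# Overall agreement between the two dichotomised classifications.
kappa = cohen_kappa_score(mmse_impaired, minicog_impaired)

# Nonparametric bootstrap 95% confidence interval for kappa.
boot = []
n = len(mmse_impaired)
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(cohen_kappa_score(mmse_impaired[idx], minicog_impaired[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"kappa = {kappa:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) item-score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical item-level scores (0/1) for a multi-item scale.
items = rng.integers(0, 2, size=(609, 11))
print(f"Cronbach's alpha = {cronbach_alpha(items):.4f}")
```

For reference, the two-parameter IRT model used for the discrimination analysis specifies the probability of endorsing item i at latent ability θ as P_i(θ) = 1 / (1 + exp(−a_i(θ − b_i))), where a_i is the discrimination parameter and b_i the difficulty parameter.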
