Abstract

The most popular metric for interrater reliability in electroencephalography is the kappa (κ) score. κ calculation is laborious, requiring EEG readers to read the same EEG studies. We introduce a retrospective method to determine the best-case κ score (κBEST) for measuring interrater reliability between EEG readers. We analyzed 1 year of EEG reports read by four adult EEG readers at our institution, using SQL queries to extract EEG findings for subsequent analysis. We fit logistic regression models for particular EEG findings, with patient age, location acuity, and EEG reader as predictors, and derived a novel measure, the κBEST statistic, from the logistic regression coefficients. Increasing patient age and location acuity were associated with decreased sleep and increased diffuse abnormalities. For certain EEG findings, the EEG reader exerted the dominant influence, manifesting directly as lower between-reader κBEST scores. Within-reader κBEST control scores were higher than between-reader scores, suggesting internal consistency. The κBEST metric can measure significant interrater reliability differences between any number of EEG readers and reports, retrospectively, and is generalizable to other domains (e.g., pathology or radiology reporting). We suggest using this metric as a guide or starting point for focused quality control efforts.
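As a rough illustration of the modeling step described above, the sketch below fits one logistic regression per EEG finding with patient age, location acuity, and EEG reader as covariates. The column names (age, acuity, reader, sleep, diffuse_abnormality), the file name, and the use of statsmodels are assumptions for illustration only; the paper's actual derivation of κBEST from the fitted coefficients is not reproduced here.

```python
# Hypothetical sketch: per-finding logistic regression with patient age,
# location acuity, and EEG reader as predictors (schema is assumed).
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns in the extracted report table:
#   age     - patient age in years
#   acuity  - categorical location acuity (e.g., outpatient, ward, ICU)
#   reader  - EEG reader identifier
#   sleep, diffuse_abnormality, ... - binary (0/1) finding indicators
reports = pd.read_csv("eeg_reports.csv")

findings = ["sleep", "diffuse_abnormality"]  # example finding columns
models = {}
for finding in findings:
    # Logit model: P(finding) as a function of age, acuity, and reader.
    models[finding] = smf.logit(
        f"{finding} ~ age + C(acuity) + C(reader)", data=reports
    ).fit(disp=False)

# The reader coefficients quantify each reader's influence on a finding after
# adjusting for age and acuity; kappa_BEST is derived from such coefficients
# in the paper (exact formula not reproduced here).
for finding, model in models.items():
    reader_terms = [p for p in model.params.index if p.startswith("C(reader)")]
    print(finding, model.params[reader_terms])
```

A large reader effect for a given finding, after adjusting for age and acuity, is what the abstract describes as the reader being the "dominant influence" and corresponds to lower between-reader κBEST for that finding.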
