Abstract

Efforts to establish support for the reliability of quality indicator data are ongoing. Because most patients receive the recommended care, event rates are highly prevalent, which makes conventional statistical analysis challenging. This article presents a novel statistical approach recently used to estimate inter-rater agreement for National Database of Nursing Quality Indicators (NDNQI) pressure injury risk and prevention data. Inter-rater agreement was estimated with prevalence-adjusted kappa values, and the data were modified to overcome convergence failures caused by sparse cross-tables. Cohen's kappa values suggested low reliability despite high levels of agreement between raters, a known limitation of the statistic when prevalence is extreme. Prevalence-adjusted kappa values should therefore be presented alongside Cohen's kappa values when evaluating inter-rater agreement in settings where the majority of patients receive recommended care.
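As an illustration of the phenomenon the abstract describes (not the authors' code or data), the sketch below computes Cohen's kappa and the prevalence-adjusted kappa (PABAK) from a square contingency table of two raters' classifications. The function name `kappa_and_pabak` and the example table are hypothetical, chosen so that agreement between raters is very high on one dominant outcome; under these assumptions Cohen's kappa collapses toward zero while the prevalence-adjusted value stays high.

```python
import numpy as np

def kappa_and_pabak(table):
    """Cohen's kappa and prevalence-adjusted kappa (PABAK)
    from a k x k contingency table of two raters' ratings."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    k = table.shape[0]
    p_o = np.trace(table) / n                 # observed agreement
    # chance agreement: sum over categories of (row marginal * column marginal)
    p_e = (table.sum(axis=0) * table.sum(axis=1)).sum() / n**2
    kappa = (p_o - p_e) / (1 - p_e)           # Cohen's kappa
    pabak = (k * p_o - 1) / (k - 1)           # prevalence-adjusted kappa
    return kappa, pabak

# Hypothetical 2x2 table: 96 of 100 patients received recommended care
# and both raters agree on them; only 4 disagreements, no agreed "no" cases.
table = [[96, 2],
         [2,  0]]
kappa, pabak = kappa_and_pabak(table)
print(f"kappa = {kappa:.3f}, PABAK = {pabak:.3f}")
# kappa is near zero (about -0.02) despite 96% raw agreement,
# while PABAK is 0.92 -- the prevalence effect the article addresses.
```

This worked example mirrors the abstract's conclusion: when nearly all patients fall into the "recommended care given" cell, the chance-agreement term in Cohen's kappa approaches the observed agreement, so the statistic understates reliability and the prevalence-adjusted value is the more informative companion measure.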
