Abstract

Background: Identifying reliable measures that distinguish care quality at the medical oncology practice level is crucial to ensuring delivery of high-value cancer care, particularly as alternative payment models in oncology become more common. We assessed the reliability of several claims-based quality measures across oncology practices.

Methods: Using 100% Medicare claims data for fee-for-service beneficiaries with cancer, we identified 6-month chemotherapy episodes starting in four 6-month performance periods (PPs) from July 2017 to June 2019. We assessed quality measures of acute care utilization among all episodes and of end-of-life (EOL) care among decedents who died during the episode or within 90 days of its end. We estimated practice-level adjusted rates from hierarchical linear models with practice-level random effects, PP fixed effects, and clinical/demographic controls. We documented the intraclass correlation coefficient (ICC; the share of variation attributed to practice) and calculated reliability (reproducibility) for the most recent 6-month PP for each measure from the between-practice variance, within-practice variance, and number of episodes per practice, excluding practices with <20 episodes. We considered reliability ≥70% (ie, <30% of the variation in practices' performance due to chance rather than true quality differences) to be adequate.

Results: Among 443,865 patients from 2,307 practices, 90% were >65 years old, 84% were White, and 31% had lung or breast cancer. The median (IQR) number of patients in practices with ≥20 episodes was 59 (17-195). Most ICCs were low, suggesting limited variation across practices (Table). All of the utilization and EOL measures had practice-level reliability <70% for the average-sized practice (Table). Most measures demonstrated little variation over time (<2 percentage point difference over 4 PPs).

Conclusions: None of the measures studied were reliable for average-sized practices, suggesting limited ability to distinguish care quality within a single PP across practices treating fee-for-service Medicare patients, except among larger practices. Several measures would be reliable for many practices over an evaluation period of 1-2 years, with the tradeoff of using less current data to monitor performance. [Table: see text]
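As a rough illustration of the reliability calculation described in the Methods (not the authors' code), the sketch below assumes the standard signal-to-noise formulation in which a practice's reliability is the between-practice variance divided by the between-practice variance plus the within-practice variance scaled by that practice's episode count. The variance components and practice sizes used here are hypothetical.

```python
# Minimal sketch of practice-level reliability under a common signal-to-noise
# formulation: reliability_j = var_between / (var_between + var_within / n_j).
# Variable names and numeric values are illustrative assumptions, not study data.

def practice_reliability(var_between: float, var_within: float, n_episodes: int) -> float:
    """Share of observed practice-level variation attributable to true
    between-practice differences rather than sampling noise."""
    return var_between / (var_between + var_within / n_episodes)

# Hypothetical variance components for a single measure.
var_between = 0.002   # between-practice variance (signal)
var_within = 0.15     # within-practice, episode-level variance (noise)

# ICC (proportion of total variation attributed to practice), as in the Methods.
icc = var_between / (var_between + var_within)
print(f"ICC = {icc:.3f}")

# Reliability for a small, a median-sized, and a large practice.
for n in (20, 59, 500):
    r = practice_reliability(var_between, var_within, n)
    print(f"n={n:>3} episodes -> reliability = {r:.2f}")
```

With these hypothetical inputs, reliability crosses the 70% threshold only for practices with several hundred episodes, which mirrors the study's conclusion that single-PP measures distinguish quality only among larger practices or over longer evaluation periods.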
