Abstract
Background: As value-based payment models for cancer care expand, so does the need for measures that reliably assess the quality of care provided. This is especially true for models like the Oncology Care Model (OCM), which rely on quality rankings to determine potential shared savings. Under such models, unreliable measures may result in the arbitrary application of value-based payments. The goal of this project was to evaluate the extent to which measures used within the OCM are reliable indicators of provider performance.

Methods: Data came from North Carolina Medicare claims, 2015-2017. Episodes were attributed to physician practices at the tax identification number (TIN) level, lasted 6 months, and were divided into two performance years beginning 1/1/2016 and 7/1/2016. TINs with fewer than 20 attributed patients were excluded. Three claims-based OCM measures were evaluated: 1) proportion of episodes with all-cause hospital admissions; 2) proportion of episodes with all-cause emergency department (ED) visits or observation stays; and 3) proportion of patients who died who were admitted to hospice for 3 or more days. Risk adjustment followed the method described in the OCM measure specifications. Reliability was calculated as the ratio of between-practice variance (i.e., signal) to the sum of between-practice and within-practice variance (i.e., signal plus noise). Variance estimates were derived from the hierarchical logistic regression models used for risk adjustment.

Results: For the hospitalization and ED visit measures, episode counts for years 1 and 2 were 30,746 and 28,430, and TIN counts were 86 and 84, respectively. The hospice measure had fewer episodes (2,677 and 2,428) and TINs (36 and 33). Across all measures, median reliability scores failed to achieve the recommended 0.7 threshold, and only the hospice measure had a median reliability above 0.5 (Table).
Conclusions: These findings suggest that the claims-based measures included in the OCM may produce imprecise estimates of provider performance and are vulnerable to random variation. Consideration should be given to developing alternative measures that may provide more reliable estimates of provider performance and to increasing minimum denominator requirements for existing measures. [Table: see text]
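The reliability calculation described in the Methods can be sketched as follows. This is a minimal illustration, not the authors' code: the between-practice variance (`tau2`, the random-intercept variance from a hierarchical logistic regression) and the logit-scale error variance (`sigma2`) are hypothetical values chosen only to show the mechanics, and the per-practice within-practice variance is approximated as `sigma2 / n` for a practice with `n` attributed episodes.

```python
import statistics

def reliability(tau2: float, sigma2: float, n: int) -> float:
    """Signal-to-noise reliability for one practice:
    between-practice variance / (between-practice + within-practice variance).
    Within-practice variance shrinks as the episode count n grows."""
    within = sigma2 / n
    return tau2 / (tau2 + within)

# Hypothetical variance components (illustrative only, not study estimates)
tau2 = 0.05    # between-practice (signal) variance from the random intercept
sigma2 = 0.82  # assumed logit-scale error variance for a single episode

# Practices with more attributed episodes get higher reliability
episode_counts = [25, 60, 150, 400]
scores = [reliability(tau2, sigma2, n) for n in episode_counts]
median_score = statistics.median(scores)
```

Because within-practice variance is inversely proportional to the episode count, reliability rises toward 1 as a practice accrues more episodes. This is why the abstract links low reliability to small denominators and suggests raising minimum denominator requirements.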