Abstract

Prior studies have reported that many machine learning (ML) projects under-perform. What steps can leaders take during ML pilot projects to identify and mitigate project risks and systems risks before implementing new ML systems at scale? We report on an exploratory case study of a U.S.-based healthcare provider organization's ML pilot project, undertaken after a software vendor proposed an automated solution combining natural language processing (NLP) and ML to improve medical claims coding quality. We reveal the tactics the client used during the pilot project to spot and limit risks that could ultimately harm the firm, its healthcare providers, and its patients. We conclude with suggestions for further research on responsible ML.
