Abstract

Medical specialists aim to provide evidence-based care grounded in the most recent scientific insights, but with the ongoing expansion of the medical literature it is infeasible to stay up to date. "Black-box" decision support tools such as Watson for Oncology (Watson) are gaining attention as a promising way to address this challenge, but it is not known whether their advice is congruent with guidelines or clinically valid in other settings. We present a protocol for the content evaluation of black-box decision support tools, and a feasibility study that applies this protocol to test the content and usability of Watson. The protocol consists of developing synthetic patient cases based on Dutch guidelines and expert opinion, entering the synthetic cases into Watson and Oncoguide, noting the response of each system, and evaluating the result using a cross-tabulation scoring system with a score range of -12 to +12. Treatment options that were not recommended by the Dutch guideline were labeled with a "red flag" if Watson recommended them, and an "orange flag" if Watson suggested them for consideration. To test the feasibility of applying the protocol, we developed synthetic patient cases for the adjuvant treatment of stage I to stage III colon cancer based on relevant patient, clinical, and tumor characteristics, and followed our protocol. Additionally, for the feasibility study we compared the recommendations of the NCCN guideline with Watson's advice and evaluated usability with a cognitive walkthrough method. In total, we developed 190 synthetic patient cases (stage I: n=8; stage II: n=110; and stage III: n=72). Overall concordance scores per case ranged from a minimum of -4 (n=6) to a maximum of +12 (n=17) for Watson versus Oncoguide, and from -4 (n=9) to +12 (n=24) for Watson versus the NCCN guidelines. In total, 69 cases (36%) were labeled with red flags, 96 cases (51%) with orange flags, and 25 cases (13%) with no flags.
For the comparison of Watson with the NCCN guidelines, no red or orange flags were identified. We developed a research protocol for the evaluation of a black-box decision support tool, which proved practical in testing the content and usability of Watson. Overall concordance scores varied considerably between synthetic cases in both comparisons, Watson versus Oncoguide and Watson versus the NCCN guidelines. The non-concordance is partially attributable to guideline differences between the United States and the Netherlands, which implies that further adjustment and localization are required before Watson is implemented outside the United States. This study describes the first steps of content evaluation of a decision support tool before implementation in daily oncological patient care. The ultimate goal of incorporating decision support tools into daily practice is to improve personalized medicine and quality of care.
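The flag-labeling step of the protocol can be sketched in code. This is a minimal illustration only: the function name, input encoding, and the rule that a case takes its most severe flag (red over orange) are our assumptions for the sketch, since the abstract does not specify how per-option flags are aggregated to the per-case flag counts reported above.

```python
def flag_case(watson_output, guideline_recommended):
    """Label a synthetic case by the non-guideline options Watson returned.

    watson_output: dict mapping treatment option -> "recommended" or
        "for_consideration" (hypothetical encoding of Watson's response)
    guideline_recommended: set of options recommended by the Dutch guideline
    """
    flags = set()
    for option, status in watson_output.items():
        if option in guideline_recommended:
            continue  # guideline-concordant options carry no flag
        if status == "recommended":
            flags.add("red")       # Watson recommended a non-guideline option
        elif status == "for_consideration":
            flags.add("orange")    # Watson suggested it for consideration
    # Assumption: a case is labeled by its most severe flag (red > orange > none)
    if "red" in flags:
        return "red"
    if "orange" in flags:
        return "orange"
    return "none"


# Hypothetical case: Watson recommends an option the guideline does not
case = {"FOLFOX": "recommended", "observation": "for_consideration"}
print(flag_case(case, {"observation"}))  # prints "red"
```

In the feasibility study this labeling would be applied to each of the 190 synthetic cases, yielding the red/orange/no-flag counts reported in the abstract.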

Highlights

  • Medical specialists strive to provide high-quality, evidence-based care based on the latest scientific insights, but it is very difficult to keep up with the growing medical literature alongside the time demands of daily patient care

  • The protocol consists of developing synthetic patient cases based on Dutch guidelines and expert opinion, entering the synthetic cases into Watson and Oncoguide, noting the response of each system and evaluating the result using a cross-tabulation scoring system resulting in a score range of −12 to +12

  • Treatment options that were not recommended by the Dutch guideline were labeled with a "red flag" if Watson recommended them, and an "orange flag" if Watson suggested them for consideration


Introduction

Medical specialists do their best to provide high-quality, evidence-based care based on the latest scientific insights, but it is very difficult to keep up with the growing medical literature alongside the time demands of daily patient care. Digital decision support tools range from simple algorithms, such as flowcharts and decision trees, to more complex systems that use artificial intelligence to provide personalized treatment advice. The latter are considered best for improving clinician performance, but such tools should be evaluated before being implemented in routine daily practice, and the usability of these systems remains variable [1]. One such system is Watson for Oncology (Watson): its selection of treatments is refined using machine learning, trained by specialists from New York's Memorial Sloan Kettering Cancer Center [5]. This combination of technologies has the potential to solve two major problems in the field of decision support: harnessing data from poorly structured medical records and keeping the medical knowledge base of the system up-to-date [6]. The ultimate goal of incorporating decision support tools into daily practice is to improve personalized medicine and quality of care.

