Abstract

Cancer risk algorithms were introduced to clinical practice in the last decade, but they remain underused. In two randomised controlled experiments, we tested the impact of a cancer risk algorithm (QCancer, not named to participants) on GPs' risk assessment and 2-week-wait referral decisions. We also tested the impact of information about the algorithm, 'social proof', and a visual explanation. We presented two different samples of GPs (total n=372) with vignettes depicting patients with possible colorectal (Experiment 1) or upper GI (Experiment 2) cancers and measured their risk estimates and inclination to refer both before and after seeing the algorithmic estimate. In Experiment 1, half of the participants read information about the algorithm. In Experiment 2, half of the participants read how Experiment 1 participants had found the algorithm useful ('social proof'). Half of the participants also saw an explanatory bar graph representing the relative contribution of symptoms to the risk estimate. Both experiments provided consistent results: after seeing the algorithm, GPs' inclination to refer changed in 26% of instances. 'Social proof' enhanced the algorithm's impact on both risk estimates and referrals. Neither information about the algorithm nor the explanatory graph affected behaviour. In both experiments, learning took place, as GPs' initial risk estimates moved closer to QCancer over time. Cancer risk algorithms have the potential to influence risk assessment and decision-making and may have a role as learning tools. Informing clinicians about the algorithms' proven usefulness to colleagues may maximise impact.
