Background
During the COVID-19 pandemic, medical laypersons with symptoms indicative of a COVID-19 infection commonly sought guidance on whether and where to find medical care. Numerous web-based decision support tools (DSTs) have been developed by both public and commercial stakeholders to assist this decision making. Although most of these DSTs rely on similar, simple decision-tree algorithms, their mode of presentation differs: some present a static flowchart, while others are designed as a conversational agent that guides the user through the decision tree's nodes step-by-step in an interactive manner.

Objective
This study aims to investigate whether interactive DSTs provide greater decision support than noninteractive (ie, static) flowcharts.

Methods
We developed mock interfaces for 2 DSTs (1 static, 1 interactive), mimicking patient-facing, freely available DSTs for COVID-19-related self-assessment. Their underlying algorithm was identical and based on the Centers for Disease Control and Prevention's guidelines. We recruited adult US residents online in November 2020. Participants appraised the appropriate social and care-seeking behavior for 7 fictitious patient descriptions (case vignettes). Participants in the intervention groups received either the static or the interactive mock DST as support, while the control group appraised the case vignettes unsupported. We measured the quality of decision support through participants' accuracy, decision certainty (assessed after each decision), and mental effort. Participants' ratings of the DSTs' usefulness, ease of use, and trustworthiness, and their intention to use the tools in the future, served as measures of differences in how the tools were perceived. We used ANOVAs and t tests to assess statistical significance.

Results
Our survey yielded 196 responses. The mean number of correct assessments was higher in the intervention groups (interactive DST group: mean 11.71, SD 2.37; static DST group: mean 11.45, SD 2.48) than in the control group (mean 10.17, SD 2.00). Decision certainty was also higher in the intervention groups (interactive DST group: mean 80.7%, SD 14.1%; static DST group: mean 80.5%, SD 15.8%) than in the control group (mean 65.8%, SD 20.8%). The differences in both measures were statistically significant in t tests comparing each intervention group with the control group (P<.001 for all 4 t tests). ANOVA detected no significant differences in mental effort between the 3 study groups. Differences between the 2 intervention groups were of small effect size and nonsignificant for all 3 measures of the quality of decision support and for most measures of participants' perception of the DSTs.

Conclusions
When the decision space is limited, as is the case in common COVID-19 self-assessment DSTs, static flowcharts may enhance decision quality as effectively as interactive tools. Given that static flowcharts reveal the underlying decision algorithm more transparently and require less development effort, they may be the more efficient way to provide guidance to the public. Further research should validate our findings in other use cases, elaborate on the trade-off between transparency and convenience in DSTs, and investigate whether subgroups of users benefit more from 1 type of user interface than the other.

Trial Registration
Deutsches Register Klinischer Studien DRKS00028136; https://tinyurl.com/4bcfausx (retrospectively registered)
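The study's actual tool interfaces are not reproduced here. As a rough illustration of the distinction the abstract draws between static and interactive presentation, the following hypothetical Python sketch encodes a toy yes/no triage decision tree and renders it either all at once (static flowchart) or node by node (interactive, conversational prompting). The questions and advice are invented placeholders, not the CDC-based algorithm the study used.

```python
# Hypothetical sketch: one decision tree, two presentation modes.
# The questions and outcomes below are invented placeholders, NOT the
# CDC-based algorithm underlying the study's mock DSTs.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    text: str                       # question (inner node) or advice (leaf)
    yes: Optional["Node"] = None    # branch followed if the user answers yes
    no: Optional["Node"] = None     # branch followed if the user answers no

    @property
    def is_leaf(self) -> bool:
        return self.yes is None and self.no is None


TREE = Node(
    "Do you have trouble breathing?",
    yes=Node("Seek emergency care immediately."),
    no=Node(
        "Do you have fever or cough?",
        yes=Node("Isolate at home and contact your physician."),
        no=Node("Monitor your symptoms; no action is needed now."),
    ),
)


def render_static(node: Node, indent: int = 0) -> None:
    """Static mode: print the entire flowchart at once, so the user
    sees the whole decision algorithm and traces their own path."""
    print("  " * indent + node.text)
    if not node.is_leaf:
        render_static(node.yes, indent + 1)
        render_static(node.no, indent + 1)


def run_interactive(node: Node) -> str:
    """Interactive mode: reveal one question per node, chatbot-style,
    until a leaf (the recommendation) is reached."""
    while not node.is_leaf:
        answer = input(node.text + " [y/n] ").strip().lower()
        node = node.yes if answer == "y" else node.no
    return node.text


if __name__ == "__main__":
    render_static(TREE)           # static flowchart view
    print(run_interactive(TREE))  # step-by-step conversational view
```

Both modes traverse the identical tree; only the presentation differs, which mirrors the study's manipulation.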
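The abstract reports pairwise t tests of each intervention group against the control group and an omnibus ANOVA across the 3 groups. A minimal sketch of that analysis pipeline using SciPy is shown below; the arrays are illustrative placeholder values only and do not reproduce the study's data.

```python
# Hypothetical sketch of the reported analysis: a one-way ANOVA across
# the three study groups plus t tests of each intervention group against
# the control group. The arrays are illustrative placeholders, NOT the
# study's data.

from scipy import stats

interactive = [12, 11, 13, 10, 14]  # e.g., correct assessments per participant
static      = [11, 12, 10, 13, 12]
control     = [9, 10, 11, 9, 10]

# Omnibus test across all three groups (used, e.g., for mental effort)
f_stat, p_anova = stats.f_oneway(interactive, static, control)
print(f"ANOVA: F={f_stat:.2f}, P={p_anova:.3f}")

# Pairwise comparisons of each intervention group against control
for name, group in (("interactive", interactive), ("static", static)):
    t_stat, p_t = stats.ttest_ind(group, control)
    print(f"{name} vs control: t={t_stat:.2f}, P={p_t:.3f}")
```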