Abstract

Background: Despite increasing interest in how conversational agents might improve healthcare delivery and information dissemination, there is limited research assessing the quality of health information provided by these technologies, especially in orthognathic surgery (OGS).

Purpose: This study aimed to measure and compare the quality of four virtual assistants (VAs) in answering frequently asked questions about OGS.

Study design, setting, and sample: This in-silico cross-sectional study assessed the responses of four VAs to a standardized set of 10 questions related to OGS.

Independent variable: The independent variable was the VA. The four VAs tested were VA1: Alexa (Seattle, Washington), VA2: Google Assistant (Mountain View, California), VA3: Siri (Cupertino, California), and VA4: Bing (San Diego, California).

Main outcome variables: The primary outcome variable was the quality of the answers generated by the four VAs. Four investigators (two orthodontists and two oral surgeons) rated the quality of each VA's responses to the standardized set of 10 questions on a five-point modified Likert scale, with the lowest score (1) signifying the highest quality. The primary outcome measure was the combined mean score of the responses from each VA; the secondary outcome was the variability in responses across investigators.

Covariates: None.

Analyses: One-way analysis of variance (ANOVA) was used to compare the average scores per question. One-way ANOVA followed by Tukey's post-hoc tests was used to compare the combined mean scores among the VAs, and the combined mean scores of all questions were evaluated to determine variability, if any, in each VA's responses across investigators.

Results: Among the four VAs, VA4 had the lowest (best) score (1.32 ± 0.57), followed by VA2 (1.55 ± 0.78), VA1 (2.67 ± 1.49), and VA3 (3.52 ± 0.50) (p < 0.001). There were no significant differences in how the VAs responded to each investigator for VA3 (p = 0.46), VA4 (p = 0.45), and VA2 (p = 0.44); only VA1 differed significantly (p = 0.003).

Conclusions: All four VAs responded to queries related to OGS, with VA4 displaying the best-quality responses, followed by VA2, VA1, and VA3. Technology companies and clinical organizations should partner to develop an intelligent VA with evidence-based responses specifically curated to educate patients.
