Abstract
Artificial intelligence-based medical systems can now diagnose a wide range of disorders with high accuracy. Nevertheless, despite these encouraging and steadily improving results, people still distrust such systems. We review relevant publications from the past five years to identify the main causes of this mistrust and ways to overcome it. Our study shows that the main reasons for distrust are opaque models, black-box algorithms, and potentially unrepresentative training samples. We demonstrate that explainable artificial intelligence, which aims to create more user-friendly and understandable systems, has become a prominent new topic in both theoretical research and practical development. Another notable trend is the development of hybrid systems in which artificial and human intelligence interact according to a teamwork model.