Abstract

Artificial intelligence-based medical systems can now diagnose various disorders with high accuracy. Nevertheless, despite encouraging and steadily improving results, people still distrust such systems. We review relevant publications from the past five years to identify the main causes of this mistrust and ways to overcome it. Our study shows that the main reasons for distrust are opaque models, black-box algorithms, and potentially unrepresentative training samples. We demonstrate that explainable artificial intelligence, aimed at creating more user-friendly and understandable systems, has become a prominent new topic in both theoretical research and practical development. Another notable trend is the development of approaches to building hybrid systems in which artificial and human intelligence interact according to a teamwork model.
