Abstract
The use of artificial intelligence (AI) systems in biomedical and clinical settings can disrupt the traditional doctor–patient relationship, which is based on trust and transparency in medical advice and therapeutic decisions. When a diagnosis or the selection of a therapy is no longer made solely by the physician but to a significant extent by a machine using algorithms, the decision becomes nontransparent. Skill learning is the most common application of machine learning algorithms in clinical decision making. It relies on a class of very general algorithms (artificial neural networks, classifiers, etc.) that are tuned on examples to optimize the classification of new, unseen cases. Because the acquired knowledge is encoded implicitly in the tuned parameters rather than represented explicitly, it is pointless to ask such a system for an explanation of its decision. A detailed understanding of the mathematics of an AI algorithm may be possible for experts in statistics or computer science; however, when it comes to the fate of human beings, this “developer’s explanation” is not sufficient. The concept of explainable AI (XAI) as a solution to this problem is attracting increasing scientific and regulatory interest. This review focuses on the requirement that XAI must be able to explain in detail the decisions made by the AI to the experts in the field.
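Purely as an illustration (not part of the original article), the following Python/scikit-learn sketch contrasts an example-tuned, opaque classifier with one model-agnostic post-hoc explanation technique, permutation importance. The dataset, model, and hyperparameters are arbitrary choices for demonstration only:

```python
# Illustrative sketch only: an example-tuned "black box" classifier and one
# post-hoc explanation technique. All concrete choices are arbitrary.
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Tune a very general algorithm (a neural network) on labelled examples.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32,),
                                    max_iter=2000, random_state=0))
model.fit(X_train, y_train)

# The tuned weights classify unseen cases well, but the weights themselves
# are not an explanation a physician could act on.
print("accuracy on unseen cases:", model.score(X_test, y_test))

# One model-agnostic XAI technique: permutation importance estimates how
# much each input feature contributes to the model's predictions.
imp = permutation_importance(model, X_test, y_test, n_repeats=10,
                             random_state=0)
for i in imp.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: mean importance {imp.importances_mean[i]:.3f}")
```

Permutation importance is only one of several explanation techniques; it is used here because it treats the model strictly as a black box, which matches the opacity problem the abstract describes.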
Highlights
The terms artificial intelligence and machine learning are sometimes used interchangeably; this is incorrect
Artificial intelligence is a branch of computer science that deals with the automation of human activities that are normally considered intelligent human behavior [1]
While the present report emphasizes the need for comprehensibility of artificial intelligence (AI)-based biomedical decisions, it should not be ignored that requiring interpretability only from the statisticians involved in the medical decision-making process falls short
Summary
The terms artificial intelligence and machine learning are sometimes used interchangeably; this is incorrect. Machine learning, currently by far the most popular method used in artificial intelligence, appears in two different forms [2]: first, approaches in which a class of very general algorithms (artificial neural networks, classifiers, predictors, associative memories, etc.) is tuned based on examples to optimize the prediction or classification of new, unseen cases. This is the deduction of knowledge from data [3,4]
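To make this first form concrete, here is a minimal, hypothetical sketch (again Python/scikit-learn; the dataset and classifier are arbitrary stand-ins) of tuning a general-purpose algorithm on examples and then classifying new, unseen cases:

```python
# Minimal sketch of "tuning on examples": fit on labelled data, then
# classify cases the model has never seen. All choices are illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_unseen, y_train, y_unseen = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0)
clf.fit(X_train, y_train)              # knowledge is derived from the data
print(clf.predict(X_unseen[:3]))       # predictions for new, unseen cases
print("accuracy:", clf.score(X_unseen, y_unseen))
```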