Abstract

In recent years, artificial intelligence and machine learning have achieved remarkable results in the medical field. The medical sector, however, demands a high level of accountability and therefore transparency: explanations for machine decisions and predictions are needed to justify their reliability. This calls for greater interpretability, which often means understanding the mechanisms underlying the algorithms. Unfortunately, the black-box nature of deep learning remains unresolved, and many machine decisions are still poorly understood. Radiologists are wary of using AI precisely because they do not trust a model to predict ailments without any form of explainability. We therefore aim to create a system that not only focuses on interpretability and explainability but also achieves accuracy high enough to be trusted and used by medical practitioners.
