Abstract

In medical practice, decisions such as image-based diagnosis must be made reliably and effectively, and automatic tools that support doctors in making these decisions are highly welcome. Artificial Intelligence techniques, and in particular Deep Learning methods, have proven very effective at such tasks, achieving excellent classification accuracy. The problem with these methods is that they are black boxes: they do not provide users with an explanation of the reasons for their decisions. Medical experts' confidence in clinical decisions can increase if Artificial Intelligence tools provide interpretable output, e.g., explanations in natural language or visualized information, so that experts can critically assess the system's outcome and evaluate the trustworthiness of its results. In this paper, we propose a new general-purpose method that relies on interpretability ideas. The approach consists of two successive steps: the first is a filtering scheme of the kind typically used in Content-Based Image Retrieval, while the second is an evolutionary algorithm that classifies and, at the same time, automatically extracts explicit knowledge in the form of a set of IF-THEN rules. This approach is tested on a set of chest X-ray images with the aim of assessing the presence of COVID-19.
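To make the second step concrete, the sketch below illustrates one way an evolutionary algorithm can evolve threshold-based IF-THEN rules over image feature vectors. This is a minimal illustration, not the authors' implementation: the rule encoding, the (mu+lambda)-style loop, the accuracy-based fitness, and the toy feature vectors are all assumptions introduced here for exposition.

```python
import random

# --- Illustrative sketch only: NOT the paper's actual implementation. ---
# Assumes images have already been reduced to small feature vectors
# (e.g., by the CBIR filtering step) and evolves threshold-based
# IF-THEN rules with a toy elitist evolutionary loop.

N_FEATURES = 4        # hypothetical descriptor length
POP_SIZE = 30
GENERATIONS = 50

def make_rule():
    """A rule is (feature_index, threshold, predicted_class):
    IF x[feature_index] > threshold THEN predicted_class."""
    return [random.randrange(N_FEATURES), random.random(), random.randint(0, 1)]

def classify(rules, x, default=0):
    # The first matching rule fires; otherwise fall back to the default class.
    for feat, thr, cls in rules:
        if x[feat] > thr:
            return cls
    return default

def fitness(rules, data):
    # Fitness is classification accuracy over labelled feature vectors.
    correct = sum(classify(rules, x) == y for x, y in data)
    return correct / len(data)

def mutate(rules):
    # Copy the rule set and perturb one rule's threshold.
    child = [rule[:] for rule in rules]
    rule = random.choice(child)
    rule[1] = min(1.0, max(0.0, rule[1] + random.gauss(0, 0.1)))
    return child

def evolve(data, rules_per_individual=3):
    population = [[make_rule() for _ in range(rules_per_individual)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=lambda ind: fitness(ind, data), reverse=True)
        elite = population[: POP_SIZE // 2]          # keep the best half
        population = elite + [mutate(random.choice(elite)) for _ in elite]
    return max(population, key=lambda ind: fitness(ind, data))

if __name__ == "__main__":
    random.seed(0)
    # Toy dataset: feature vectors in [0,1]^4 with a planted decision rule.
    data = [[random.random() for _ in range(N_FEATURES)] for _ in range(100)]
    data = [(x, int(x[2] > 0.6)) for x in data]
    best = evolve(data)
    for feat, thr, cls in best:
        print(f"IF x[{feat}] > {thr:.2f} THEN class {cls}")
    print("accuracy:", fitness(best, data))
```

The key interpretability property this sketch tries to convey is that the evolved individual is itself the explanation: the final rule set can be printed and read directly, unlike the weights of a deep network.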
