Abstract

Primary malignancies of the adult brain are fatal worldwide. Computer vision, especially with recent developments in artificial intelligence (AI), has created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have achieved unprecedented accuracy in a variety of image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, operate as black boxes, concealing the rationale behind their predictions, yet such interpretation is an essential step towards translating AI imaging tools into clinical routine. Explainable AI approaches aim to visualize the high-level features of trained models or to integrate interpretability into the training process itself. This study evaluates the performance of selected deep-learning algorithms at localizing tumor lesions and distinguishing the lesion from healthy regions in magnetic resonance imaging contrasts. Despite a significant correlation between classification accuracy and lesion localization accuracy (R = 0.46, p = 0.005), the well-known AI algorithms examined in this study classify some tumor-containing brains based on irrelevant features. The results suggest that explainable AI approaches can build intuition about model interpretability and may play an important role in the performance evaluation of deep learning models. Developing explainable AI approaches will be essential for improving human–machine interaction and assisting in the selection of optimal training methods.
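To make "visualizing the high-level features of trained models" concrete, below is a minimal sketch of Grad-CAM, one widely used technique for localizing the image regions a CNN classifier relies on. The tiny two-class network, the random single-channel stand-in for an MRI slice, and all names here are illustrative assumptions, not the models or data examined in the study.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyCNN(nn.Module):
        # Toy two-class classifier (tumor vs. healthy) on one-channel slices.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            )
            self.head = nn.Linear(16, 2)

        def forward(self, x):
            fmap = self.features(x)            # (N, 16, H/2, W/2) feature maps
            pooled = fmap.mean(dim=(2, 3))     # global average pooling
            return self.head(pooled), fmap

    def grad_cam(model, x, target_class):
        # Heatmap of the regions that drive the score for target_class.
        model.eval()
        logits, fmap = model(x)
        fmap.retain_grad()                     # keep gradients of the feature maps
        logits[0, target_class].backward()
        weights = fmap.grad.mean(dim=(2, 3), keepdim=True)  # per-channel importance
        cam = F.relu((weights * fmap).sum(dim=1))           # weighted activation map
        cam = cam / (cam.max() + 1e-8)                      # normalize to [0, 1]
        return F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                             mode="bilinear", align_corners=False)[0, 0].detach()

    x = torch.randn(1, 1, 64, 64)              # stand-in for one MRI slice
    heatmap = grad_cam(TinyCNN(), x, target_class=1)
    print(heatmap.shape)                       # torch.Size([64, 64])

Overlaying such a heatmap on the input slice shows whether a correct "tumor" prediction is actually driven by the lesion or by irrelevant regions, which is exactly the failure mode the abstract reports.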

Highlights

  • Artificial intelligence (AI) developments have created opportunities for human life across a wide range of industries, business, education, and healthcare [1,2]

  • Explainable models estimate the importance of each feature to the model's predictions, providing interpretable tools for understanding deep learning outcomes [3] (a minimal feature-importance sketch follows this list)

  • This study evaluates the high-level features of deep convolutional neural networks for predicting tumor lesion locations in the brain
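As a minimal sketch of the feature-importance idea from the second highlight: permutation importance shuffles one feature at a time and measures how much the model's score drops. The synthetic tabular data and the RandomForestClassifier are illustrative assumptions; the study itself works with CNNs on MRI, where the analogous step perturbs image regions rather than columns.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    # Synthetic stand-in data: 200 samples, 5 tabular features.
    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Shuffle each feature in turn; a large accuracy drop means the
    # prediction depends heavily on that feature.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, imp in enumerate(result.importances_mean):
        print(f"feature_{i}: importance = {imp:.3f}")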

Introduction

Artificial intelligence (AI) developments have created opportunities for human life across a wide range of industries, business, education, and healthcare [1,2]. As part of AI, deep-learning-derived approaches provide convenient autonomous image classification in the medical domain [1]. Traditional modeling techniques such as linear regression and decision trees provide an understandable relationship between the input data and the decisions in the model outputs [2]. These models are often called white-box models, but they are usually not as performant as black-box models such as convolutional neural networks (CNNs), complicated ensembles, and other deep learning models. The data used for model training can also be entangled in their own set of biases.
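As a minimal sketch of why such models are called white-box, assuming nothing beyond scikit-learn and its built-in iris toy dataset (not the study's MRI data): a fitted decision tree can be printed as explicit, human-readable rules, whereas the weights of a CNN admit no such direct reading.

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

    # Every prediction can be traced through explicit feature thresholds.
    print(export_text(tree, feature_names=list(iris.feature_names)))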
