Abstract

Background: To push the frontiers of brain-computer interfaces (BCI) and neuro-electronics, this research presents a novel framework that combines cutting-edge technologies for improved brain-related diagnostics in smart healthcare. Drawing on Gradient-weighted Class Activation Mapping (Grad-CAM) based Explainable Artificial Intelligence (XAI), it offers a ground-breaking application of transparent strategies to BCI, promoting openness and confidence in brain-computer interactions. The integration of these technologies stands to redefine the landscape of healthcare diagnostics, especially for illnesses related to the brain.

New method: The proposed approach uses the Xception architecture, pre-trained on the ImageNet database and adapted through transfer learning, to extract significant features from magnetic resonance imaging (MRI) data acquired from distinct publicly available sources; a linear support vector machine (SVM) then classifies the extracted features into the distinct classes. Grad-CAM is subsequently deployed as the foundation for XAI, generating informative heatmaps that represent the spatial localization of the features on which the model's predictions rely.

Results: The proposed model not only provides accurate outcomes but also makes the predictions generated by the Xception network transparent when diagnosing the presence of abnormal tissue, while avoiding overfitting. Hyperparameters and performance metrics are reported from validation of the network on unseen brain MRI scans to confirm its effectiveness.

Comparison with existing methods and conclusions: Integrating Grad-CAM-based XAI with the Xception deep neural network has a significant impact on diagnosing brain tumor disease while highlighting the specific regions of the input brain MRI images responsible for the predictions. The proposed network achieves 98.92% accuracy, 98.15% precision, 99.09% sensitivity, 98.18% specificity and a 98.91% Dice coefficient in identifying the presence of abnormal brain tissue. Thus, the Xception model trained on the distinct datasets through transfer learning offers remarkable diagnostic accuracy, and the linear SVM acts as an efficient classifier across the distinct classes. In addition, the deployed XAI approach reveals the reasoning behind predictions made by the otherwise black-box deep neural network, providing a clear perspective that helps medical experts achieve trustworthiness and transparency while diagnosing brain tumor disease in smart healthcare.
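As a rough illustration of the pipeline described above, the sketch below shows how an ImageNet-pretrained Xception backbone can serve as a frozen feature extractor whose outputs feed a linear SVM. This is a minimal sketch under assumed tooling (TensorFlow/Keras and scikit-learn), not the authors' implementation; `load_brain_mri_dataset()` is a hypothetical loader standing in for the paper's MRI sources.

```python
import numpy as np
from tensorflow.keras.applications import Xception
from tensorflow.keras.applications.xception import preprocess_input
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Xception pre-trained on ImageNet, used as a frozen feature extractor:
# include_top=False drops the ImageNet classifier, and "avg" pooling
# yields one 2048-dimensional feature vector per image.
backbone = Xception(weights="imagenet", include_top=False, pooling="avg",
                    input_shape=(299, 299, 3))
backbone.trainable = False

def extract_features(images: np.ndarray) -> np.ndarray:
    """images: (N, 299, 299, 3) RGB arrays in [0, 255]; returns (N, 2048) features."""
    return backbone.predict(preprocess_input(images.astype("float32")), verbose=0)

# Hypothetical dataset loader; the actual MRI sources are not reproduced here.
# X_train, y_train, X_test, y_test = load_brain_mri_dataset()
# svm = LinearSVC(C=1.0).fit(extract_features(X_train), y_train)
# print("accuracy:", accuracy_score(y_test, svm.predict(extract_features(X_test))))
```

A Grad-CAM heatmap can then be produced from the convolutional feature maps of the backbone. The sketch below follows the standard Keras Grad-CAM recipe and, for illustration only, scores classes with a Keras model that has a classification head (the paper instead couples the extracted features with the linear SVM); `block14_sepconv2_act` is the last convolutional activation layer in Keras's Xception.

```python
import tensorflow as tf

def grad_cam(model, image, layer_name="block14_sepconv2_act", class_index=None):
    """Return a Grad-CAM heatmap in [0, 1] for one preprocessed image of shape (1, 299, 299, 3)."""
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(image)
        if class_index is None:
            class_index = tf.argmax(preds[0])         # explain the top predicted class
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_maps)       # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))        # global-average-pooled gradients
    cam = tf.reduce_sum(weights[:, tf.newaxis, tf.newaxis, :] * conv_maps, axis=-1)
    cam = tf.nn.relu(cam)[0]                            # keep only positively contributing regions
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalise to [0, 1]

# Example (illustrative): full Xception with its ImageNet head as the scoring model;
# the low-resolution heatmap would be upsampled to 299x299 before overlaying on the MRI slice.
# heatmap = grad_cam(Xception(weights="imagenet"), preprocess_input(x[np.newaxis]))
```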
