Abstract

Emerging deep learning technologies are driving a new wave of artificial intelligence, but in critical applications such as medical image processing, deep learning is often inapplicable due to its lack of interpretability, which is essential in such settings. This work develops an explainable feedforward model with Gaussian kernels, in which a Gaussian mixture model is leveraged to extract representative features. To keep the error within an allowable range, we derive a lower bound on the number of samples via the Chebyshev inequality. During training, we discuss both deterministic and stochastic feature representations, and investigate their performance as well as that of the ensemble model. Additionally, we use Shapley additive explanations (SHAP) to analyze the experimental results. Because the proposed method is interpretable, it can replace deep neural networks by working with shallow machine learning techniques such as the Support Vector Machine and Random Forest. We compare our method with baseline methods on the Brain Tumor and Mitosis datasets. The experimental results show that our method outperforms RAM (Recurrent Attention Model), VGG19 (Visual Geometry Group 19), LeNet-5, and the Explainable Prediction Framework while retaining strong interpretability.
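To make the pipeline described above concrete, the sketch below illustrates the two ingredients the abstract names: a Gaussian mixture model whose per-component responsibilities serve as the extracted feature representation, and a Chebyshev-inequality lower bound on the number of samples needed to keep the estimation error within a tolerance. This is a minimal illustration, not the authors' implementation: the digits dataset stands in for the medical images, and the function name `chebyshev_sample_lower_bound` and all parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC


def chebyshev_sample_lower_bound(sigma2, eps, delta):
    """Chebyshev's inequality gives P(|mean_n - mu| >= eps) <= sigma2 / (n * eps^2).
    Requiring the right-hand side to be at most delta yields
    n >= sigma2 / (delta * eps^2)."""
    return int(np.ceil(sigma2 / (delta * eps ** 2)))


# Toy data standing in for the medical images (hypothetical stand-in).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a GMM and use the per-component posterior responsibilities as the
# (deterministic) feature representation of each sample.
gmm = GaussianMixture(n_components=10, covariance_type="diag", random_state=0)
gmm.fit(X_train)
F_train = gmm.predict_proba(X_train)   # shape: (n_samples, n_components)
F_test = gmm.predict_proba(X_test)

# Feed the Gaussian-kernel features to a shallow, interpretable learner.
clf = SVC(kernel="rbf").fit(F_train, y_train)
print("test accuracy:", clf.score(F_test, y_test))

# Example bound: with sigma^2 = 0.25, tolerance eps = 0.05, and failure
# probability delta = 0.05, Chebyshev requires n >= 0.25 / (0.05 * 0.05^2).
print("required samples:", chebyshev_sample_lower_bound(0.25, 0.05, 0.05))
```

A SHAP analysis of the resulting model could then follow the same pattern, e.g. with the model-agnostic `shap.KernelExplainer(clf.decision_function, F_train)`, treating each Gaussian component's responsibility as a feature; which explainer and model output the authors actually used is not stated in the abstract.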
