Abstract

In the current generation of research, artificial intelligence has played a vital role in many fields, including healthcare. One of the key areas where it has shown enormous potential is cancer detection and treatment. AI and machine learning methods have been applied to analyze large datasets, such as genomic, transcriptomic, and imaging data, to identify patterns and relationships that can aid cancer diagnosis and therapy. However, because of the inherent complexity and heterogeneity of tumors across individual patients, building a diagnostic and therapeutic platform whose outputs can be analyzed accurately is a challenging task. To address this challenge, researchers have proposed explainable AI (XAI) frameworks for cancer detection. XAI frameworks aim to make the decision-making process of AI algorithms transparent and comprehensible, so that the predictions or classifications these algorithms generate can be understood and trusted by healthcare professionals. One popular XAI method is SHAP (SHapley Additive exPlanations), which provides intuitive and interpretable feature importance [13] for individual predictions. Another is LIME (Local Interpretable Model-agnostic Explanations), which generates post hoc local explanations and is suited to producing quick, satisfactory explanations. These existing XAI methods, however, have limitations when applied to cancer detection. In this article, we therefore propose two novel frameworks, Neutrosophic Meta SHAP and Neutrosophic Meta LIME, designed specifically for the analysis and interpretation of AI models in oral cancer detection.
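For reference, the sketch below shows how the baseline SHAP and LIME libraries the abstract mentions are typically applied to a tabular cancer-detection classifier. It illustrates only the standard methods, not the proposed Neutrosophic Meta SHAP or Neutrosophic Meta LIME frameworks; the scikit-learn breast cancer dataset and random forest model are stand-ins chosen purely for illustration.

```python
# Minimal sketch of the standard SHAP and LIME workflows on a tabular
# classifier. Dataset and model are illustrative stand-ins, not the
# authors' oral-cancer data or proposed Neutrosophic Meta frameworks.
import shap
import lime.lime_tabular
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# SHAP: additive feature attributions for each individual prediction.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test[:1])

# LIME: a post hoc local surrogate model fit around a single instance.
lime_explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train, feature_names=data.feature_names,
    class_names=data.target_names, mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # top features with local weights
```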
