Abstract
Diabetes is one of the most common chronic conditions, affecting people of all ages, and is caused by inadequate insulin production. Interpreting retinal images, recognizing the appearance of dark spots, and detecting diabetic retinopathy in its early stages has long been a major challenge. Explainable AI methods aim to make a deep learning model's behavior understandable to humans so that its results can be trusted. This is especially important in safety-critical domains such as healthcare and security, where these methods replace manual processes and allow non-technical domain experts to understand how the model functions. Explainable AI is used to describe an AI model, its expected impact, and its potential biases. It helps characterize model accuracy, fairness, transparency, and outcomes in AI-powered decision making. Explainability is crucial for an organization in building trust and confidence when putting AI models into production, and it also helps an organization adopt a responsible approach to AI development. In the deep learning model, the backpropagation step is responsible for updating the weights based on the error function. SHAP (SHapley Additive exPlanations) is a visualization tool that makes a machine learning model more explainable by visualizing its output: it can explain the prediction of any model by computing the contribution of each feature to that prediction.
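The feature-contribution idea behind SHAP can be sketched directly as an exact Shapley-value computation. The linear "risk score" model, its weights, the input, and the all-zero baseline below are illustrative assumptions for the sketch, not details from the paper; real SHAP implementations approximate these values efficiently for large models rather than enumerating every coalition.

```python
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical linear risk score; the weights are illustrative only.
    w = [0.5, 1.5, -2.0]
    return sum(wi * xi for wi, xi in zip(w, x))

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.

    Features outside a coalition are replaced by their baseline value,
    so each phi[i] is feature i's average marginal contribution to f(x).
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Standard Shapley coalition weight: |S|! (n-|S|-1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

x = [1.0, 2.0, 0.5]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
# By the efficiency property, the contributions sum to f(x) - f(baseline).
```

For a linear model with a zero baseline each contribution reduces to the weight times the feature value, which makes this toy case easy to check by hand; the same decomposition is what a SHAP plot visualizes per feature.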
International Journal of Innovative Research in Advanced Engineering