Abstract

When artificial intelligence (AI) is used to diagnose skin lesions, its output often comes without explanation, leaving physicians unable to interpret or validate the results and making such diagnostic systems significantly less safe. In this paper, we propose a deep inherent learning method to classify seven types of skin lesions and validate it using several explanation techniques. Explainable AI (X-AI) is used to explain the decision-making process at both the local and global levels, and we provide visual information to help physicians trust the proposed method. The challenging HAM10000 dataset is used to evaluate the method. Our simple, stage-based X-AI framework helps medical practitioners better understand the mechanisms of black-box AI models, and they can trust the proposed method because the rationale behind its decisions is explained.
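The abstract does not name the specific explanation techniques used. As a minimal, hedged illustration of the kind of local explanation it describes, the sketch below applies Grad-CAM (an assumption, not necessarily the paper's method) to a hypothetical seven-class lesion classifier; the backbone, layer choice, and input are all placeholders.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical stand-in for the paper's (unspecified) classifier: a ResNet-18
# with a 7-way head, matching the seven HAM10000 lesion classes.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 7)
model.eval()

# Capture the last conv block's activations and their gradients.
acts, grads = {}, {}

def save_activations(module, inputs, output):
    acts["a"] = output
    output.register_hook(lambda g: grads.update(g=g))

model.layer4.register_forward_hook(save_activations)

def grad_cam(image: torch.Tensor) -> torch.Tensor:
    """Heat map of the pixels that drove the top-scoring class."""
    logits = model(image)                                 # (1, 7)
    logits[0, logits.argmax()].backward()                 # winning class only
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)   # channel importance
    cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze()    # normalise to [0, 1]

# Stand-in lesion image; in practice, a preprocessed HAM10000 sample.
heatmap = grad_cam(torch.randn(1, 3, 224, 224))
```

Overlaying such a heat map on the dermoscopic image gives a per-case (local) explanation; a global picture could be built by aggregating attributions across the dataset, though which local and global techniques the paper actually employs is not stated in this abstract.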
