Abstract

White blood cells (WBCs) are crucial constituents of the blood that protect the human body against infections and viruses. The classification of WBCs in a blood smear image is used to diagnose a range of haematological disorders. Manual identification of WBCs is error-prone, motivating automated systems that can assist in classification. Recently, various deep learning models, such as DenseNet121, Xception, MobileNetV2, ResNet50, and VGG16, have been used to classify WBCs. However, these classification models are black boxes: their decisions are difficult for humans to understand without further exploration. The interpretability and explainability of such models are essential, as their decisions can have severe consequences for patients. In this paper, we integrate an explainable AI (XAI) technique called local interpretable model-agnostic explanations (LIME) with the DenseNet121 classification model for WBC classification. Interpretable results allow users to understand and verify the model’s predictions, enhancing their confidence in the automated diagnosis.
