Abstract

Background and Aims

The histopathological classification for ANCA-associated glomerulonephritis (ANCA-GN) is a well-established tool to reflect the variety of patterns and severity of lesions that can occur in renal biopsies of patients with ANCA-associated vasculitis. As in many other fields, medicine has seen a rapid emergence of Artificial Intelligence (AI) and deep learning (DL) approaches. In digital pathology, AI can now serve as decision support for pathologists, offering potential gains in productivity and time savings. It was previously demonstrated that AI can aid in identifying histopathological classes of renal diseases, e.g. diabetic nephropathy. Although these models reach high prediction accuracies, their black-box structure makes them opaque: the networks' decisions are not easily interpretable by humans, and it is unclear what information in the input data underlies them. This necessitates Explainable AI (XAI), so that decisions made by AI models become accessible for validation by a human expert.

Method

Renal biopsy slides of 80 patients with ANCA-GN from three European centers, who underwent a diagnostic renal biopsy between 1991 and 2011, were included. On the scanned slides, glomeruli were labelled as 'normal', 'sclerotic', 'crescentic', or 'abnormal - other'. We developed a DL-based computational pipeline that detects and classifies the glomeruli. We investigated the explainability of our model, using XAI techniques, specifically saliency maps, to shed light on the decision-making criteria of our trained DL classifier. These maps were analyzed by pathologists to compare the decision-making criteria of humans and the DL model.

Results

Our DL model achieves a prediction accuracy of 93% for classifying glomeruli. The saliency maps from our trained DL models help us to better understand the decision-making criteria of the DL black box.
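The abstract does not specify the saliency method or model architecture. As a minimal, self-contained sketch of the underlying idea of gradient-based saliency maps (the magnitude of the class score's gradient with respect to each input pixel indicates how much that pixel influenced the decision), the following toy NumPy example uses a randomly initialized one-hidden-layer classifier; the network, its weights, and the 4x4 "image patch" are all hypothetical stand-ins, not the authors' trained pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained glomerulus classifier: one hidden ReLU layer.
# Weights are random here; in a real pipeline they would come from training.
W1 = rng.normal(size=(8, 16))   # hidden x input
W2 = rng.normal(size=(4, 8))    # 4 classes: normal, sclerotic, crescentic, other

def forward(x):
    h = np.maximum(W1 @ x, 0.0)  # ReLU hidden activations
    return W2 @ h                # class scores (logits)

def saliency(x, cls):
    """Vanilla gradient saliency: |d score_cls / d x|, via the chain rule."""
    mask = (W1 @ x > 0).astype(float)  # ReLU derivative (0 or 1 per unit)
    grad = (W2[cls] * mask) @ W1       # backpropagate score to the input
    return np.abs(grad)

x = rng.normal(size=16)                # flattened 4x4 "image patch"
cls = int(np.argmax(forward(x)))       # predicted class
s = saliency(x, cls)                   # one importance value per pixel
print(cls, s.round(2))
```

In practice a framework's automatic differentiation would compute the gradient for a full convolutional network, and `s` would be reshaped and overlaid on the biopsy image for the pathologists to inspect.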
Conclusion

AI and DL play an increasingly important role in (nephro)pathology. To ultimately enable safe implementation of these models in clinical practice, their decisions must be validated. To achieve this, we used XAI techniques, which showed great potential for illuminating the decision-making criteria of the DL black box.
