Abstract

Deep learning methods have achieved remarkable results in medical image analysis tasks, but they have not yet been widely adopted by medical professionals. One of the main reasons for this limited adoption is uncertainty about which factors influence a model's decision. Explainable AI methods have been developed to improve the transparency, interpretability, and explainability of black-box AI methods, and the results of an explainable segmentation method are more likely to be trusted by experts. In this study, we designed an explainable deep correction method that incorporates cascaded 1D and 2D models to refine the output of other segmentation models and provide reliable yet accurate results. We implemented a two-step loop, with a 1D local boundary validation model in the first step and a 2D image patch segmentation model in the second step, to refine incorrectly segmented regions slice by slice. The proposed method improved the results of CNN segmentation models and achieved state-of-the-art performance on 3D liver segmentation, with an average Dice coefficient of 98.27 on the Sliver07 dataset.
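The abstract describes the correction loop only at a high level. Below is a minimal sketch of such a two-step, slice-by-slice refinement, assuming hypothetical placeholder models (`validate_boundary_1d`, `resegment_patch_2d`) and parameters (`patch_size`, `profile_len`) that are not specified in the source; it illustrates the control flow, not the authors' actual trained networks.

```python
import numpy as np

# Hypothetical stand-ins for the trained models described in the abstract; the
# real 1D validation and 2D patch segmentation networks are not specified there.

def validate_boundary_1d(profile: np.ndarray) -> bool:
    """Placeholder 1D boundary check: accept profiles with a strong edge response."""
    return profile.size > 1 and np.abs(np.diff(profile)).max() > 0.1  # assumed threshold

def resegment_patch_2d(patch: np.ndarray) -> np.ndarray:
    """Placeholder 2D patch segmentation: a simple intensity threshold stands in
    for the paper's patch segmentation model."""
    return (patch > patch.mean()).astype(np.uint8)

def refine_slice(image_2d: np.ndarray, mask_2d: np.ndarray,
                 patch_size: int = 32, profile_len: int = 16) -> np.ndarray:
    """Two-step correction loop on one slice: step 1 samples a 1D intensity
    profile across each predicted boundary pixel and validates it; step 2
    re-segments a 2D patch around every pixel whose boundary looks invalid."""
    refined = mask_2d.copy()
    gy, gx = np.gradient(mask_2d.astype(float))
    boundary = np.argwhere((gy != 0) | (gx != 0))  # rough boundary pixels of the mask
    half = patch_size // 2
    for y, x in boundary:
        # Step 1: 1D local boundary validation (profile taken along the row here).
        x0 = max(0, x - profile_len // 2)
        x1 = min(image_2d.shape[1], x + profile_len // 2)
        if validate_boundary_1d(image_2d[y, x0:x1]):
            continue  # boundary judged correct; leave this region untouched
        # Step 2: 2D re-segmentation of the patch around the invalid boundary point.
        ys = slice(max(0, y - half), y + half)
        xs = slice(max(0, x - half), x + half)
        refined[ys, xs] = resegment_patch_2d(image_2d[ys, xs])
    return refined

def refine_volume(volume: np.ndarray, masks: np.ndarray) -> np.ndarray:
    """Apply the correction slice by slice, as the abstract describes."""
    return np.stack([refine_slice(s, m) for s, m in zip(volume, masks)])
```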
