Abstract

Deep learning methods have achieved remarkable results in medical image analysis tasks, but they have not yet been widely adopted by medical professionals. One of the main reasons for this limited adoption is the uncertainty about which factors influence a model's decision. Explainable AI methods have been developed to improve the transparency, interpretability, and explainability of black-box AI methods, and the result of an explainable segmentation method is more likely to be trusted by experts. In this study, we designed an explainable deep correction method that incorporates cascaded 1D and 2D models to refine the output of other models and provide reliable yet accurate results. We implemented a two-step loop, with a 1D local boundary validation model in the first step and a 2D image patch segmentation model in the second step, to refine incorrectly segmented regions slice by slice. The proposed method improved the results of CNN segmentation models and achieved state-of-the-art performance on 3D liver segmentation, with an average Dice coefficient of 98.27% on the Sliver07 dataset.
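To make the two-step loop concrete, the sketch below shows one possible slice-level realization of the idea: a 1D model scores intensity profiles through the current boundary points, and a 2D model re-segments small patches around the points it rejects. Everything here is an illustrative assumption, not the authors' code; the class names (BoundaryValidator1D, PatchSegmenter2D), the tiny untrained networks, the horizontal profile sampling, and all hyperparameters are hypothetical stand-ins for the models actually trained in the paper.

```python
import numpy as np
import torch
import torch.nn as nn


class BoundaryValidator1D(nn.Module):
    """Step 1 (hypothetical): scores a 1D intensity profile sampled
    through a boundary point as correct (close to 1) or not."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(8, 1), nn.Sigmoid(),
        )

    def forward(self, profiles):           # (N, 1, L)
        return self.net(profiles)          # (N, 1) validity score


class PatchSegmenter2D(nn.Module):
    """Step 2 (hypothetical): re-segments a small 2D patch around a
    boundary point that step 1 rejected."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, patches):            # (N, 1, P, P)
        return self.net(patches)           # (N, 1, P, P) soft mask


def boundary_points(mask):
    """Foreground pixels with at least one background 4-neighbour."""
    m = mask.astype(bool)
    pad = np.pad(m, 1)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    return np.argwhere(m & ~interior)


@torch.no_grad()
def refine_slice(img, mask, validator, segmenter,
                 profile_len=32, patch=16, max_iters=3, thresh=0.5):
    """Two-step correction loop for one slice: validate boundary points
    with the 1D model, re-segment patches around rejected points with
    the 2D model, and repeat until all points pass or max_iters."""
    H, W = img.shape
    h, p = profile_len // 2, patch // 2
    for _ in range(max_iters):
        pts = [(r, c) for r, c in boundary_points(mask)
               if h <= c < W - h and p <= r < H - p and p <= c < W - p]
        if not pts:
            break
        # Step 1: horizontal profiles for brevity; the paper presumably
        # samples along the local boundary normal instead.
        profiles = torch.tensor(
            np.stack([img[r, c - h:c + h] for r, c in pts]),
            dtype=torch.float32).unsqueeze(1)
        ok = validator(profiles).squeeze(1) > thresh
        bad = [pt for pt, good in zip(pts, ok.tolist()) if not good]
        if not bad:
            break
        # Step 2: overwrite the mask around each rejected point with the
        # 2D patch model's prediction.
        for r, c in bad:
            tile = torch.tensor(img[r - p:r + p, c - p:c + p],
                                dtype=torch.float32)[None, None]
            mask[r - p:r + p, c - p:c + p] = \
                (segmenter(tile)[0, 0] > thresh).numpy()
    return mask


# Toy usage on random data; in practice img would be one CT slice and
# mask the initial CNN segmentation of that slice.
img = np.random.rand(128, 128).astype(np.float32)
mask = np.zeros((128, 128), dtype=np.uint8)
mask[40:90, 40:90] = 1
refined = refine_slice(img, mask, BoundaryValidator1D(), PatchSegmenter2D())
```

In a full 3D pipeline, refine_slice would be applied to every slice of the volume, with the loop terminating early once the 1D validator accepts all boundary points.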
