Abstract

Because convolutional neural networks are black-box models, computer-aided diagnosis methods based on deep learning are usually poorly interpretable. Diagnoses produced by such unexplained methods are therefore difficult for patients and doctors to trust, which limits their application in the medical field. To address this problem, this paper proposes an interpretable deep learning image segmentation framework for processing brain tumor magnetic resonance images. A gradient-based class activation mapping method is introduced into a pyramid-structured segmentation model to explain it visually. The pyramid structure builds global context information from features produced after multiple pooling layers to improve segmentation performance; class activation mapping is therefore used to visualize the features attended to by each level of the pyramid structure, realizing an interpretation of PSPNet. After training and testing the model on the public BraTS2018 dataset, several sets of visualization results were obtained. Analysis of these results demonstrates the effectiveness of the pyramid structure in the brain tumor segmentation task, and the shortcomings the visualizations reveal motivate several improvements to the pyramid model's structure. In summary, the interpretable brain tumor image segmentation method proposed in this paper explains the role of the pyramid structure in brain tumor image segmentation well, which offers a direction for applying interpretability methods to brain tumor segmentation and has practical value for evaluating and optimizing brain tumor segmentation models.
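The gradient-based class activation mapping the abstract refers to (Grad-CAM) weights each feature channel by the spatial average of the target score's gradient and sums the weighted activations through a ReLU. The following is a minimal numpy sketch of that weighting step only; the random tensors stand in for real activations and backpropagated gradients, which the paper obtains from the trained PSPNet.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Gradient-weighted Class Activation Mapping (Grad-CAM) heatmap.

    feature_maps: (C, H, W) activations of a convolutional layer.
    gradients:    (C, H, W) gradients of the target score w.r.t. those
                  activations (supplied directly here; in practice they
                  come from backpropagation).
    Returns an (H, W) heatmap normalised to [0, 1].
    """
    # Channel weights: global-average-pool the gradients over space.
    weights = gradients.mean(axis=(1, 2))                       # (C,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalise for visualisation.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example with random tensors standing in for real activations.
rng = np.random.default_rng(0)
features = rng.standard_normal((8, 4, 4))
grads = rng.standard_normal((8, 4, 4))
heatmap = grad_cam(features, grads)
print(heatmap.shape)  # (4, 4)
```

Applied to each pooling level of the pyramid, such heatmaps show which regions of the tumor image each level attends to, which is how the paper visualizes PSPNet.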

Highlights

  • The results of brain tumor image segmentation can clearly show the category, location, and volume of lesion areas [1, 2]

  • Some image segmentation frameworks [7,8,9] based on convolutional neural networks (CNN) have shown very high performance in various computer-aided diagnosis (CAD) systems [10,11,12,13,14]; however, the complexity of these machine learning models is often greatly increased in order to improve accuracy

  • This complex feature extraction improves the model's accuracy but makes it impossible to know the relationship between the model's predictions and the features extracted by the CNN, making the CNN a black box that is difficult for humans to interpret


Summary

Introduction

The results of brain tumor image segmentation can clearly show the category, location, and volume of lesion areas [1, 2]. Complex models represented by CNNs can extract image features through multilevel abstract reasoning to handle the very complex relationships between dependent and independent variables, which enables very high accuracy. However, this complex feature extraction improves the model's accuracy but makes it impossible to know the relationship between the model's predictions and the features extracted by the CNN, making the CNN a black box that is difficult for humans to interpret.
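The pyramid structure the abstract credits with building global context aggregates such multilevel features by pooling the same feature map at several scales and fusing the results. The following numpy sketch illustrates PSPNet-style pyramid pooling on a single-channel map; the function names, the bin sizes (1, 2, 3, 6), and the nearest-neighbour upsampling are illustrative simplifications, not the paper's implementation.

```python
import numpy as np

def adaptive_avg_pool(x, out_size):
    """Average-pool an (H, W) map down to (out_size, out_size),
    mimicking adaptive-pooling bin boundaries."""
    h, w = x.shape
    out = np.empty((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            h0, h1 = i * h // out_size, -(-(i + 1) * h // out_size)
            w0, w1 = j * w // out_size, -(-(j + 1) * w // out_size)
            out[i, j] = x[h0:h1, w0:w1].mean()
    return out

def pyramid_pooling(feature_map, bin_sizes=(1, 2, 3, 6)):
    """Pool the input at several scales, upsample each pooled map back
    to the input resolution (nearest neighbour for simplicity), and
    stack the levels with the original map along a new channel axis."""
    h, w = feature_map.shape
    levels = [feature_map]
    for s in bin_sizes:
        pooled = adaptive_avg_pool(feature_map, s)
        rows = np.arange(h) * s // h      # nearest-neighbour row indices
        cols = np.arange(w) * s // w      # nearest-neighbour column indices
        levels.append(pooled[rows[:, None], cols[None, :]])
    return np.stack(levels)               # (1 + len(bin_sizes), H, W)

x = np.arange(36, dtype=float).reshape(6, 6)
out = pyramid_pooling(x)
print(out.shape)  # (5, 6, 6)
```

The coarsest level (bin size 1) reduces the whole map to its global mean, which is what supplies the global context; the finer levels preserve progressively more spatial detail.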

Methods
Experiments
Experiments on Interpretation
Conclusion
