Abstract
Tumor grading of laryngeal cancer and the interpretability of that grading are key yet challenging tasks in clinical diagnosis, mainly because the commonly used low-magnification pathological images lack fine cellular-structure information and accurate localization, the diagnoses of pathologists differ from those of attention-based convolutional network methods, and the gradient-weighted class activation mapping (Grad-CAM) method cannot be optimized to produce the best visualization maps. To address these problems, we propose an end-to-end deep domain adaptive network (DDANet) with integrated gradient CAM and prior experience-guided attention, which improves tumor grading performance and interpretability by introducing the pathologist's prior experience at high magnification into the deep model. Specifically, a novel prior experience-guided attention (PE-GA) method is developed to solve the traditional unsupervised attention optimization problem. In addition, a novel integrated gradient CAM is proposed to mitigate the overfitting, information redundancy, and low sparsity of the Grad-CAM maps generated by the PE-GA method. Furthermore, we establish a set of quantitative evaluation metrics for model visual interpretation. Extensive experimental results show that, compared with state-of-the-art methods, the average grading accuracy increases to 88.43% (↑4.04%) and the effective interpretable rate increases to 52.73% (↑11.45%). Our method also effectively reduces the discrepancy between computer-vision-based and pathologist diagnoses. Importantly, the visualized interpretive maps lie closer to the regions of interest that pathologists attend to, and our model outperforms pathologists with different levels of experience.
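For context, the sketch below shows the standard Grad-CAM computation that the paper's integrated gradient CAM builds on: channel weights are the spatially averaged gradients of the class score with respect to a convolutional feature map, and the visualization is the ReLU of the weighted activation sum. The backbone, hooked layer, and input are illustrative assumptions, not the authors' DDANet or dataset.

```python
# Minimal Grad-CAM sketch (the baseline technique the paper improves on).
# Assumptions: a ResNet-18 stand-in backbone and a random 224x224 input;
# the paper's DDANet, PE-GA, and pathology data are NOT reproduced here.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in backbone, not DDANet
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()

# Hook the last convolutional block (a common choice for Grad-CAM).
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)        # dummy image patch
scores = model(x)
cls = scores.argmax(dim=1).item()      # explain the top predicted class
scores[0, cls].backward()

# Channel weights: spatial average of the gradients, shape (1, C, 1, 1).
w = gradients["feat"].mean(dim=(2, 3), keepdim=True)
# CAM: ReLU of the weighted activation sum, upsampled to input size.
cam = F.relu((w * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0,1]
```

The abstract's critique applies to maps produced this way: because the weights come from a single backward pass, the resulting heatmaps can be redundant and insufficiently sparse, which the proposed integrated gradient CAM is designed to mitigate.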