Abstract

Despite the recent surge of interest in digital pathology, efficient analysis of Whole Slide Images (WSIs) faces many challenges that hinder performance, such as their massive size, high heterogeneity, and imbalanced class distribution. This paper proposes a Joint Multi-decoder Dual-attention U-Net (JMDU-Net) framework for tumor segmentation in WSIs to address these issues. JMDU-Net is an end-to-end framework consisting of a single encoder with one main and four supportive decoder branches. JMDU-Net incorporates several novel blocks that adaptively acquire diversified contextual channel-wise features and progressively exploit knowledge from multi-perspective hierarchical feature fusion. Moreover, a multi-magnification majority-voting ensemble is proposed to jointly consider the multi-scale representations available at the different magnifications of a WSI. Experimental results and statistical tests on the public PAIP 2019 dataset demonstrate the effectiveness of JMDU-Net, which achieves an average Dice score of 88.0% with a 77.4% clipped Jaccard index under 5-fold cross-validation on the training data, while the proposed ensemble obtains an 86.4% Dice score with a 79.8% clipped Jaccard index on the official validation set. Furthermore, generalizability is assessed on the DigestPath 2019 and PAIP 2023 datasets, where JMDU-Net achieves average Dice scores of 84.9% and 75.3% and clipped Jaccard values of 74.4% and 63.0%, respectively. The main aim of JMDU-Net is to provide medical professionals with a complementary assessment that enhances the level of care delivered to patients by harnessing the power of deep learning for cancer diagnosis. The code will be available publicly at https://github.com/Heba-AbdeNabi/JMDU-Net.
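The multi-magnification majority-voting ensemble mentioned above can be illustrated with a minimal sketch. The function name, interface, and toy data below are assumptions for illustration, not taken from the paper; the sketch only assumes that per-magnification binary tumor masks have already been resampled to a common resolution, and that a pixel is labeled tumor when more than half of the masks agree.

```python
import numpy as np

def majority_vote_ensemble(masks):
    """Fuse binary tumor masks predicted at different magnifications.

    `masks`: list of 2D binary arrays, already resampled to a common
    resolution (illustrative interface, not from the paper).
    A pixel is labeled tumor when a strict majority of masks agree.
    """
    stacked = np.stack(masks, axis=0)            # (num_magnifications, H, W)
    votes = stacked.sum(axis=0)                  # per-pixel agreement count
    return (votes > len(masks) / 2).astype(np.uint8)

# Toy example: three 2x2 masks from three hypothetical magnification levels.
m1 = np.array([[1, 0], [1, 1]])
m2 = np.array([[1, 0], [0, 1]])
m3 = np.array([[0, 0], [1, 1]])
fused = majority_vote_ensemble([m1, m2, m3])
# fused == [[1, 0], [1, 1]]
```

An odd number of magnification levels avoids ties; with an even count, the strict `>` comparison breaks ties toward the background class.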
