Abstract

The increasing integration of deep learning with computational pathology has amplified the use of whole slide images (WSIs) in modern clinical diagnosis. However, loading an entire WSI directly is often infeasible due to memory constraints, and the traditional strategy of segmenting a WSI into image patches for random sampling introduces redundant pathological information and precludes end-to-end training. To address this, we propose a classification model for WSIs centered on self-learning sampling. First, a self-learning sampling module for pathological images selects key patches, which are embedded into a Transformer encoder to capture inter-patch correlations, enabling end-to-end training of the entire network. We also design a combined focal and sampling loss function to tackle both the imbalanced distribution of pathological image samples and redundancy in sampling. Experiments on the TCGA-LUSC dataset and a colon cancer dataset from collaborating hospitals show that our model matches the accuracy and AUC of the state-of-the-art TransMIL method, while reducing WSI inference time relative to TransMIL by 15.1% and 22.4% on the respective datasets.
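The focal component of the combined loss mentioned above is the standard focal loss of Lin et al. (2017), which down-weights well-classified examples so training focuses on hard, minority-class patches. A minimal PyTorch sketch (the function name, `gamma`, and `alpha` defaults are illustrative assumptions, not the paper's exact configuration):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Focal loss for class-imbalanced classification (Lin et al., 2017).

    Illustrative sketch only; the paper combines a term like this with a
    sampling loss whose exact form is not given in the abstract.
    """
    # Per-sample cross-entropy, kept unreduced so we can reweight it
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)  # model's probability for the true class
    # (1 - pt)^gamma shrinks the loss of confident (easy) examples
    return (alpha * (1.0 - pt) ** gamma * ce).mean()
```

With `gamma=2`, a patch classified with 90% confidence contributes roughly 100x less loss than one classified at 50%, which is what lets imbalanced or redundant easy patches stop dominating the gradient.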
