Abstract

Colorectal cancer (CRC) is the third most common and second most lethal cancer globally. It is highly heterogeneous, with varied clinicopathological characteristics, prognoses, and therapy responses. Precise diagnosis of CRC subtypes is therefore of great significance for improving the prognosis and survival of CRC patients. Currently, the most commonly used molecular-level CRC classification system is the Consensus Molecular Subtypes (CMSs). In this study, we applied a weakly supervised deep learning method, attention-based multi-instance learning (MIL), to formalin-fixed paraffin-embedded (FFPE) whole-slide images (WSIs) to distinguish the CMS1 subtype from CMS2, CMS3, and CMS4, as well as CMS4 from CMS1, CMS2, and CMS3. The advantage of MIL is that it trains on bags of tiled instances using bag-level labels only. Our experiments were performed on 1218 WSIs obtained from The Cancer Genome Atlas (TCGA). We constructed three convolutional neural network-based structures for model training and evaluated the ability of the max-pooling and mean-pooling operators to aggregate bag-level scores. The results showed that the 3-layer model achieved the best performance in both comparison groups. When comparing CMS1 with CMS234, max-pooling reached an ACC of 83.86% and mean-pooling reached an AUC of 0.731; when comparing CMS4 with CMS123, mean-pooling reached an ACC of 74.26% and max-pooling reached an AUC of 0.609. Our results imply that WSIs can be utilized to classify CMSs, and that manual pixel-level annotation is not a necessity for computational pathology image analysis.
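The bag-level aggregation described above can be illustrated with a minimal sketch. This is not the authors' implementation; it only shows, under simplified assumptions, how per-tile (instance) scores from a WSI "bag" might be pooled into a single bag-level score with the max-pooling or mean-pooling operator:

```python
import numpy as np

def aggregate_bag_score(instance_scores, pooling="max"):
    """Pool per-tile instance scores into one bag-level score.

    In MIL, a whole-slide image is treated as a "bag" of tiled
    instances; only the bag carries a label, so instance-level
    scores must be aggregated before computing the bag loss.
    """
    scores = np.asarray(instance_scores, dtype=float)
    if pooling == "max":
        # Bag score driven by the single most informative tile.
        return float(scores.max())
    elif pooling == "mean":
        # Bag score driven by the average evidence across tiles.
        return float(scores.mean())
    raise ValueError(f"unknown pooling operator: {pooling}")

# Hypothetical example: five tiles from one WSI, each score a
# model-predicted probability of the positive subtype (e.g. CMS1).
tile_scores = [0.1, 0.2, 0.9, 0.3, 0.25]
print(aggregate_bag_score(tile_scores, "max"))   # 0.9
print(aggregate_bag_score(tile_scores, "mean"))  # 0.35
```

Max-pooling tends to suit tasks where a small region of the slide is decisive, while mean-pooling favors diffuse, slide-wide evidence; the abstract's results suggest the better operator differed between the two comparison groups.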
