Abstract

The gigapixel resolution of a single whole slide image (WSI) and the lack of large annotated datasets in computational pathology make cancer diagnosis and grading from WSIs a challenging task. Moreover, downsampling WSIs may discard information critical for cancer diagnosis. Motivated by the observation that context, such as topological structures in the tumor environment, can carry critical information for cancer grading and diagnosis, a novel two-stage learning approach is proposed. Self-supervised learning is applied to exploit unlabeled data during training, and a graph convolutional network (GCN) is deployed to incorporate context from the tumor and surrounding tissues. More specifically, the whole slide is represented as a graph whose nodes are patches from the WSI. Each patch is represented by a feature vector obtained from self-supervised pre-training on the patches. The graph is then trained with a GCN, which accounts for the context of each tissue region in cancer grading and classification. In this work, the model is validated on WSIs of prostate cancer; its performance in diagnosing and grading prostate cancer is evaluated and compared with ResNet50, a traditional convolutional neural network (CNN), and multiple-instance learning (MIL), a leading approach in WSI diagnosis.
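The second stage described above, passing patch embeddings through a graph convolution, can be sketched in miniature. The snippet below is a minimal illustration, not the paper's implementation: the patch count, feature dimensions, random features (standing in for self-supervised embeddings), and adjacency pattern are all hypothetical, and a standard Kipf-and-Welling-style propagation rule is assumed for the GCN layer.

```python
import numpy as np

# Toy example: a WSI represented as a graph of 4 patches.
# Each node (patch) carries an 8-dim feature vector, standing in
# for the embedding produced by self-supervised pre-training.
rng = np.random.default_rng(0)
num_patches, feat_dim, hidden_dim = 4, 8, 3
H = rng.standard_normal((num_patches, feat_dim))   # node features

# Adjacency: 1 where two patches are spatial neighbours on the slide.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

def gcn_layer(H, A, W):
    """One graph-convolution layer:
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

W = rng.standard_normal((feat_dim, hidden_dim))     # learnable weights
H_out = gcn_layer(H, A, W)
print(H_out.shape)  # (4, 3)
```

After one such layer, each patch's representation mixes in its neighbours' features, which is how the model incorporates context from surrounding tissue before slide-level grading.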

