Abstract

Graph convolutional neural networks have shown significant potential in natural and histopathology image analysis. However, their use has so far been studied only at a single magnification, or at multiple magnifications with either homogeneous graphs or graphs that differ only in node type. To leverage multi-magnification information and improve message passing in graph convolutional networks, we introduce the Multi-Scale Relational Graph Convolutional Network (MS-RGCN), a multiple instance learning method that handles a separate embedding space at each magnification. We model histopathology image patches and their relations with neighboring patches and with patches at other scales (i.e., magnifications) as a graph. We define separate message-passing neural networks based on node and edge types to pass information between the embedding spaces of the different magnifications. We experiment on prostate cancer histopathology images to predict grade groups from features extracted from patches. We also compare MS-RGCN with multiple state-of-the-art methods, with evaluations on several source and held-out datasets. Our method outperforms the state-of-the-art on all of the datasets and image types, consisting of tissue microarrays, whole-mount slide regions, and whole-slide images. Through an ablation study, we test and show the value of the pertinent design features of MS-RGCN.
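
To illustrate the idea of relation-specific message passing across magnification scales, below is a minimal sketch (not the authors' implementation) of a graph layer with a separate message network per edge type. The edge types (same-magnification neighbor vs. cross-magnification link), feature dimensions, and aggregation choice are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of relation-specific message passing over a multi-magnification
# patch graph. Edge types and dimensions are hypothetical, for illustration only.
import torch
import torch.nn as nn


class RelationalGraphLayer(nn.Module):
    """One message-passing layer with a separate linear transform per edge type."""

    def __init__(self, in_dim: int, out_dim: int, num_edge_types: int):
        super().__init__()
        # One message network per relation (edge type).
        self.msg_nets = nn.ModuleList(
            [nn.Linear(in_dim, out_dim) for _ in range(num_edge_types)]
        )
        self.self_net = nn.Linear(in_dim, out_dim)  # self/root transform

    def forward(self, x, edge_index, edge_type):
        # x: [num_nodes, in_dim] patch embeddings (e.g., from a CNN backbone)
        # edge_index: [2, num_edges] (source, target) node indices
        # edge_type: [num_edges] integer relation id per edge
        src, dst = edge_index
        out = self.self_net(x)
        for r, net in enumerate(self.msg_nets):
            mask = edge_type == r
            if mask.any():
                # Transform source-node features with the relation-specific net,
                # then mean-aggregate the messages onto the target nodes.
                msgs = net(x[src[mask]])
                agg = torch.zeros_like(out)
                agg.index_add_(0, dst[mask], msgs)
                counts = torch.bincount(dst[mask], minlength=x.size(0)).clamp(min=1)
                out = out + agg / counts.unsqueeze(1).to(agg.dtype)
        return torch.relu(out)


# Toy usage: 6 patch nodes (e.g., low- and high-magnification patches),
# edge type 0 = same-magnification neighbor, 1 = cross-magnification link.
x = torch.randn(6, 128)
edge_index = torch.tensor([[0, 1, 0, 0], [1, 0, 4, 5]])
edge_type = torch.tensor([0, 0, 1, 1])
layer = RelationalGraphLayer(128, 64, num_edge_types=2)
print(layer(x, edge_index, edge_type).shape)  # torch.Size([6, 64])
```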
