Abstract

Annotating histopathological images is generally a time-consuming and labor-intensive process that requires board-certified pathologists to carefully examine large-scale, multi-modal images pixel by pixel. Transfer learning techniques have recently been widely investigated for image understanding tasks with limited annotations. However, when applied to the grading and recognition of histopathological images, few of them can effectively avoid the performance degradation caused by the domain discrepancy arising from diverse tissue/organ appearances and imaging devices. To this end, we present a novel method for unsupervised domain adaptation (UDA) in histopathological image analysis, based on a backbone neural network with graph neural layers for propagating the supervision signals of labeled images. A graph is first constructed by connecting every image with its close neighbors in the embedded feature space. A graph neural network is then employed to synthesize a new feature representation for each image. During training, target samples with confident predictions are dynamically assigned pseudo labels. Accordingly, a cross-entropy loss constrains the predictions for source samples with manually annotated labels and for target samples with pseudo labels. Furthermore, the maximum mean discrepancy (MMD) is adopted to facilitate the extraction of domain-invariant feature representations, and contrastive learning is exploited to enhance the category discrimination of the learned features. Our proposed method is evaluated on multiple histopathological image datasets and shows promising classification performance in comparison with state-of-the-art domain adaptation methods.
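Two of the ingredients named above, the MMD alignment term and the confidence-gated pseudo labeling, can be sketched in a minimal form as follows. This is an illustrative sketch, not the paper's implementation: the Gaussian-kernel bandwidth `sigma` and the 0.95 confidence threshold are assumed values for demonstration.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel between rows of X and Y.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy between
    # source features X and target features Y; near zero when the two
    # feature distributions match, larger when they diverge.
    kxx = gaussian_kernel(X, X, sigma).mean()
    kyy = gaussian_kernel(Y, Y, sigma).mean()
    kxy = gaussian_kernel(X, Y, sigma).mean()
    return kxx + kyy - 2.0 * kxy

def pseudo_labels(probs, threshold=0.95):
    # Assign a pseudo label only where the max class probability is
    # confident; -1 marks target samples left unlabeled this round.
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    labels[conf < threshold] = -1
    return labels
```

In training, `mmd2` would be added to the objective as a domain-alignment penalty on the embedded features, while `pseudo_labels` selects which target samples contribute to the cross-entropy loss in the current iteration.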
