Abstract

Visual representation extraction is a fundamental problem in computational histopathology. Given the powerful representation capacity of deep learning and the scarcity of annotations, self-supervised learning has emerged as a promising approach for extracting effective visual representations from unlabeled histopathological images. Although a few self-supervised learning methods have been proposed specifically for histopathological images, most suffer from defects that limit their versatility or representation capacity. In this work, we propose CS-CO, a hybrid self-supervised visual representation learning method tailored for H&E-stained histopathological images that integrates the advantages of both generative and discriminative approaches. The method consists of two self-supervised learning stages: cross-stain prediction (CS) and contrastive learning (CO). In addition, a novel data augmentation approach, named stain vector perturbation, is proposed specifically to facilitate contrastive learning. CS-CO makes good use of domain-specific knowledge and requires no side information, which ensures both rationality and versatility. We evaluate and analyze CS-CO on three H&E-stained histopathological image datasets, with downstream tasks of patch-level tissue classification and slide-level cancer prognosis and subtyping. Experimental results demonstrate the effectiveness and robustness of CS-CO on common computational histopathology tasks. Furthermore, our ablation studies show that cross-stain prediction and contrastive learning in CS-CO complement and enhance each other. Our code is available at https://github.com/easonyang1996/CS-CO.
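To make the stain vector perturbation idea concrete, below is a minimal, hypothetical sketch of such an augmentation. It is not the paper's implementation: we assume a standard stain-deconvolution setup with a fixed reference H&E stain matrix (Ruifrok & Johnston values), jitter the stain vectors with Gaussian noise, and recompose the image. The function name `stain_vector_perturbation` and the `sigma` parameter are our own illustrative choices.

```python
# Hypothetical sketch of a stain vector perturbation augmentation
# (assumption: the paper's exact algorithm may differ).
import numpy as np

# Reference H&E stain vectors in optical-density space (Ruifrok & Johnston).
HE_REF = np.array([[0.650, 0.704, 0.286],   # hematoxylin
                   [0.072, 0.990, 0.105]])  # eosin

def stain_vector_perturbation(rgb, sigma=0.05, rng=None):
    """Return an augmented copy of an H&E patch (H, W, 3), uint8 in [0, 255]."""
    rng = np.random.default_rng(rng)
    # Beer-Lambert: convert RGB to optical density (add 1 to avoid log(0)).
    od = -np.log((rgb.astype(np.float64) + 1.0) / 256.0)
    # Per-pixel stain concentrations under the reference stain matrix.
    conc = od.reshape(-1, 3) @ np.linalg.pinv(HE_REF)
    # Jitter the stain vectors with small Gaussian noise, then renormalize rows.
    perturbed = HE_REF + rng.normal(0.0, sigma, HE_REF.shape)
    perturbed /= np.linalg.norm(perturbed, axis=1, keepdims=True)
    # Recompose the image in optical-density space and map back to RGB.
    od_new = conc @ perturbed
    rgb_new = 256.0 * np.exp(-od_new) - 1.0
    return np.clip(rgb_new, 0, 255).reshape(rgb.shape).astype(np.uint8)
```

Because the perturbation acts only on the two stain vectors, the augmented views differ in stain appearance while keeping tissue morphology intact, which is the property one would want for contrastive pairs.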
