Abstract

The deep learning (DL)-based approaches in tumor pathology help to overcome the limitations of subjective visual examination by pathologists and improve diagnostic accuracy and objectivity. However, it is unclear how a DL system trained to discriminate normal/tumor tissues in a specific cancer would perform on other tumor types. Herein, we cross-validated DL-based normal/tumor classifiers separately trained on tissue slides of cancers from the bladder, lung, colon and rectum, stomach, bile duct, and liver. Furthermore, we compared the differences between the classifiers trained on frozen or formalin-fixed paraffin-embedded (FFPE) tissues. The area under the receiver operating characteristic (ROC) curve (AUC) ranged from 0.982 to 0.999 when the tissues were analyzed by classifiers trained on the same tissue preparation modality and cancer type. However, the AUCs could drop to 0.476 and 0.439 when classifiers trained on different tissue modalities and cancer types were applied. Overall, optimal performance could be achieved only when the tissue slides were analyzed by classifiers trained on the same preparation modality and cancer type.
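The cross-evaluation described above can be illustrated with a short sketch. The snippet below is a minimal illustration and not the authors' code: `load_classifier` and `load_test_patches` are hypothetical helpers standing in for the paper's TCGA data pipeline, a scikit-learn-style `predict_proba` interface is assumed, and the patch-level ROC AUC is computed for every (training, test) combination of tissue modality and cancer type.

```python
# Sketch: cross-applying per-cancer, per-modality normal/tumor classifiers
# and computing a patch-level ROC AUC for each (train, test) pair.
import itertools
from sklearn.metrics import roc_auc_score

CANCERS = ["bladder", "lung", "colorectal", "stomach", "bile_duct", "liver"]
MODALITIES = ["frozen", "ffpe"]

def cross_evaluate(load_classifier, load_test_patches):
    """Return a dict mapping (train_key, test_key) -> patch-level ROC AUC."""
    results = {}
    for train_key in itertools.product(MODALITIES, CANCERS):
        clf = load_classifier(*train_key)                   # trained normal/tumor model (hypothetical loader)
        for test_key in itertools.product(MODALITIES, CANCERS):
            patches, labels = load_test_patches(*test_key)  # image patches and 0/1 tumor labels (hypothetical loader)
            probs = clf.predict_proba(patches)[:, 1]        # P(tumor) for each patch
            results[(train_key, test_key)] = roc_auc_score(labels, probs)
    return results
```

The diagonal entries of such a results table correspond to matched training and test conditions, while off-diagonal entries quantify how performance degrades when modality or cancer type differs.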

Highlights

  • Visual examination of hematoxylin-eosin (H&E)-stained tissue slides by pathologists has been the foundation of cancer diagnosis for different types of cancers [1]. However, it is well known that visual assessment of tissue slides is often subjective, and intra- and inter-observer variabilities are unavoidable [2,3]

  • Normal/tumor classifiers for the six different cancer types were trained with the training datasets of The Cancer Genome Atlas (TCGA) frozen and formalin-fixed paraffin-embedded (FFPE) whole slide images (WSIs) (Figure 1a)

  • The first two graphs of panel b present the receiver operating characteristic (ROC) curves for the patch-level classification results on all frozen tissue image patches in the test datasets, as obtained by the classifiers trained on frozen and FFPE tissues (a patch-level inference sketch follows this list)
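
The ROC curves above are built from per-patch tumor probabilities. The following is a minimal sketch of how such probabilities can be produced, assuming OpenSlide and PyTorch are available; the 256-pixel patch size, the level-0 magnification, and the ResNet-18 backbone are illustrative assumptions and are not details confirmed by this excerpt.

```python
# Sketch: tiling a whole slide image into non-overlapping patches and scoring
# each patch with a binary normal/tumor CNN (architecture assumed, not the paper's).
import openslide
import torch
import torchvision.models as models
import torchvision.transforms as T

PATCH = 256  # patch size in pixels at level 0 (assumed)

model = models.resnet18()
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # two classes: normal vs. tumor
model.eval()

to_tensor = T.Compose([T.ToTensor(),
                       T.Normalize(mean=[0.5] * 3, std=[0.5] * 3)])

def patch_probabilities(wsi_path):
    """Yield (x, y, P(tumor)) for every non-overlapping patch in the slide."""
    slide = openslide.OpenSlide(wsi_path)
    width, height = slide.dimensions
    with torch.no_grad():
        for y in range(0, height - PATCH + 1, PATCH):
            for x in range(0, width - PATCH + 1, PATCH):
                img = slide.read_region((x, y), 0, (PATCH, PATCH)).convert("RGB")
                logits = model(to_tensor(img).unsqueeze(0))
                yield x, y, torch.softmax(logits, dim=1)[0, 1].item()
```

Collecting these probabilities together with per-patch ground-truth labels is what allows the patch-level ROC curves and AUCs reported in the paper to be computed (for example, with the `roc_auc_score` call shown earlier).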

Introduction

Visual examination of hematoxylin-eosin (H&E)-stained tissue slides by pathologists has been the foundation of cancer diagnosis for different types of cancers [1]. However, it is well known that visual assessment of tissue slides is often subjective, and intra- and inter-observer variabilities are unavoidable [2,3]. Considering the predicted shortage of pathologists in the near future and the increased workload from the analysis of various molecular tests, automation of histopathologic diagnosis will be necessary to optimize workload in many pathology laboratories [5,6]. Massive amounts of digital pathology images are available for researchers interested in the automation of diagnosis for cancer tissues. The performance of computer-based image analysis has been much improved by the adoption of deep learning technology [9]. Combined with huge digital WSI datasets, deep learning has been rapidly adopted for pathologic diagnosis tasks such as Gleason grading of prostate cancer.
