The digitization of whole-slide images in digital pathology has advanced computer-aided tissue examination using machine learning techniques, especially convolutional neural networks. A number of convolutional neural network-based methodologies have been proposed to accurately analyze histopathological images for cancer detection, risk prediction, and cancer subtype classification. Most existing methods conduct patch-based examinations because histopathological images are extremely large. However, patches from a small window often do not contain sufficient information or patterns for the tasks of interest. This mirrors how pathologists examine tissue at multiple magnification levels under a microscope to assess complex morphological patterns. We propose a novel multi-task deep learning model for HIstoPatholOgy (named Deep-Hipo) that takes multi-scale patches simultaneously for accurate histopathological image analysis. Deep-Hipo extracts two patches of the same size at both high and low magnification levels, capturing complex morphological patterns in both large and small receptive fields of a whole-slide image. In our experiments, Deep-Hipo outperformed current state-of-the-art deep learning methods. We assessed the proposed method on various types of whole-slide images of the stomach: well-differentiated, moderately-differentiated, and poorly-differentiated adenocarcinoma; poorly cohesive carcinoma, including signet-ring cell features; and normal gastric mucosa. The optimally trained model was also applied to histopathological images of The Cancer Genome Atlas (TCGA) Stomach Adenocarcinoma (TCGA-STAD) and Colon Adenocarcinoma (TCGA-COAD) cohorts, which show pathological patterns similar to gastric carcinoma, and the experimental results were clinically verified by a pathologist. The source code of Deep-Hipo is publicly available at http://dataxlab.org/deep-hipo.
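The dual-patch idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the slide is stood in for by a NumPy array, and the function name, patch size, and downsampling scheme are assumptions for illustration only.

```python
import numpy as np

def multiscale_patches(slide, center, patch_size=224, scale=4):
    """Hypothetical sketch of a dual-patch extraction in the spirit of
    Deep-Hipo: two equally sized patches centered at the same location,
    one at high magnification (native resolution, small receptive field)
    and one at low magnification (a `scale`-times wider field, downsampled
    to the same pixel dimensions)."""
    cy, cx = center
    half = patch_size // 2
    # High-magnification patch: small receptive field, full detail.
    high = slide[cy - half:cy + half, cx - half:cx + half]
    # Low-magnification patch: wider field of view, then subsample by
    # `scale` so both patches end up with identical pixel dimensions.
    wide_half = half * scale
    wide = slide[cy - wide_half:cy + wide_half, cx - wide_half:cx + wide_half]
    low = wide[::scale, ::scale]
    return high, low

# Toy whole-slide stand-in: both returned patches are 224x224 pixels,
# but `low` covers a 4x wider field of the slide than `high`.
slide = np.zeros((1024, 1024), dtype=np.uint8)
high, low = multiscale_patches(slide, center=(512, 512))
```

In practice such patches would be read from a pyramidal whole-slide format at the appropriate pyramid levels rather than by subsampling a full-resolution array, but the pairing of a detailed view and a wide-context view at identical input sizes is the same.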