Abstract

Image-based machine learning, and deep learning in particular, has recently achieved expert-level accuracy in medical image classification. In this study, we combine convolutional and recurrent architectures to train a deep network that predicts colorectal cancer outcome from images of tumour tissue samples. The novelty of our approach is that we predict patient outcome directly, without any intermediate tissue classification step. We evaluate a set of digitized haematoxylin-eosin-stained tumour tissue microarray (TMA) samples from 420 colorectal cancer patients with clinicopathological and outcome data available. The results show that deep learning-based outcome prediction using only small tissue areas as input (hazard ratio 2.3; 95% CI 1.79–3.03; AUC 0.69) outperforms visual histological assessment performed by human experts at both the TMA-spot (HR 1.67; 95% CI 1.28–2.19; AUC 0.58) and whole-slide level (HR 1.65; 95% CI 1.30–2.15; AUC 0.57) in stratifying patients into low- and high-risk groups. Our results suggest that state-of-the-art deep learning techniques can extract more prognostic information from the tissue morphology of colorectal cancer than an experienced human observer.
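The AUC values reported above measure how well a continuous risk score ranks patients by outcome. As a minimal illustration of the metric (not the paper's evaluation code), AUC can be computed as the rank statistic: the probability that a randomly chosen event case receives a higher risk score than a randomly chosen non-event case. The function name and toy data below are our own:

```python
def auc(scores, labels):
    """AUC as a rank statistic: fraction of (event, non-event) pairs in
    which the event case has the higher risk score; ties count as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]  # event cases
    neg = [s for s, y in zip(scores, labels) if y == 0]  # non-event cases
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: a perfectly ranking risk score gives AUC = 1.0.
print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0
```

An AUC of 0.5 corresponds to random ranking, which is why the reported 0.69 vs. 0.57–0.58 gap between the model and visual assessment is meaningful.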

Highlights

  • Reincarnation of artificial neural networks in the form of deep learning[1,2,3] has improved the accuracy of several pattern recognition tasks, such as classification of objects, scenes and various other entities in digital images

  • We obtained images of haematoxylin and eosin (H&E) stained tissue microarray (TMA) spots from 420 patients diagnosed with CRC[33]

  • Reusing neural network trained on one domain for similar purposes in other domains is known as transfer learning[36,37]
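The transfer-learning idea mentioned above — reusing a network trained on one domain as a feature extractor for another — typically amounts to freezing the pretrained layers and training only a new task-specific head. A minimal PyTorch sketch (the toy encoder stands in for a real pretrained CNN; all names and sizes are our assumptions):

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained image encoder (e.g. an ImageNet CNN).
encoder = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in encoder.parameters():
    p.requires_grad = False  # freeze the transferred feature layers

head = nn.Linear(8, 1)  # only this new task head is trained
model = nn.Sequential(encoder, head)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(trainable)  # 9: the head's 8 weights + 1 bias
```

Freezing the encoder keeps the trainable parameter count tiny, which is what makes transfer learning practical when the target dataset (here, 420 patients) is small.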


Introduction

Reincarnation of artificial neural networks in the form of deep learning[1,2,3] has improved the accuracy of several pattern recognition tasks, such as classification of objects, scenes and various other entities in digital images. Tissue images typically comprise a complex combination of patterns, and the conventional design of an automated tissue classifier requires substantial domain expertise to decide which particular features to extract and feed into a classification algorithm. This task, known as feature engineering, is often laborious and time-consuming. Mapping image data directly to patient survival information through a trained deep learning model should decrease the variability and errors introduced by more subjective labelling of tissue entities.
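The combined convolutional-recurrent design described in the abstract — CNN features extracted from tissue tiles, aggregated by a recurrent network into a single per-patient risk score — can be sketched as follows. The layer sizes, tile counts and the use of PyTorch are our illustrative assumptions, not details from the paper:

```python
import torch
import torch.nn as nn

class TileOutcomeModel(nn.Module):
    """Illustrative CNN + LSTM: a small CNN embeds each tissue tile,
    an LSTM aggregates the tile sequence, and a linear head emits
    one scalar risk score per patient."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        self.lstm = nn.LSTM(embed_dim, 32, batch_first=True)
        self.head = nn.Linear(32, 1)

    def forward(self, tiles):
        # tiles: (batch, n_tiles, 3, H, W) — a sequence of tiles per patient
        b, n = tiles.shape[:2]
        feats = self.cnn(tiles.flatten(0, 1)).view(b, n, -1)
        _, (h, _) = self.lstm(feats)          # final hidden state
        return self.head(h[-1]).squeeze(-1)   # one risk score per patient

model = TileOutcomeModel()
risk = model(torch.randn(2, 5, 3, 64, 64))  # 2 patients, 5 tiles each
print(risk.shape)  # torch.Size([2])
```

Because the model maps tiles straight to a risk score, training requires only image-outcome pairs, with no intermediate per-tile tissue labels — the property the paragraph above identifies as the key advantage.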

