Abstract

Background: Deep learning has huge potential to transform healthcare; however, significant expertise is required to train such models. In this study, we therefore sought to evaluate whether healthcare professionals with limited coding expertise, and no deep learning expertise, could use automated deep learning software to develop medical image diagnostic classifiers.

Methods: We used five publicly available open-source datasets: (i) retinal fundus images (MESSIDOR); (ii) optical coherence tomography (OCT) images (Guangzhou Medical University/Shiley Eye Institute, Version 3); (iii) images of skin lesions (Human Against Machine (HAM)10000); and (iv) both paediatric and adult chest X-ray (CXR) images (Guangzhou Medical University/Shiley Eye Institute, Version 3 and the National Institutes of Health (NIH) CXR14 dataset, respectively). Each dataset was fed separately into a neural architecture search framework that automatically developed a deep learning architecture to classify common diseases. Sensitivity (recall), specificity and positive predictive value (precision) were used to evaluate the diagnostic properties of the models, and discriminative performance was assessed using the area under the precision-recall curve (AUPRC). For the deep learning model developed on a subset of the HAM10000 dataset, we performed external validation using the Edinburgh Dermofit Library dataset.

Findings: Diagnostic properties and discriminative performance from internal validations were high in the binary classification tasks (range: sensitivity of 73·3-97·0%, specificity of 67-100% and AUPRC of 0·87-1). In the multiple classification tasks, sensitivity ranged from 38% to 100% and specificity from 67% to 100%; AUPRC ranged from 0·57 to 1 across the five automated deep learning models. In an external validation using the Edinburgh Dermofit Library dataset, the automated deep learning model showed an AUPRC of 0·47, with a sensitivity of 49% and a positive predictive value of 52%. The quality of the open-access datasets used in this study (including the lack of information about patient flow and demographics) and the absence of measures of precision, such as confidence intervals, constituted the major limitations of this study.

Interpretation: All models, except for the automated deep learning model trained on the multi-label classification task of the NIH CXR14 dataset, showed discriminative performance and diagnostic properties comparable to state-of-the-art deep learning algorithms. Performance in the external validation study was low. The availability of automated deep learning may become a cornerstone for the democratization of sophisticated algorithmic modelling in healthcare, as it allows classification models to be derived without requiring a deep understanding of the underlying mathematical, statistical and programming principles. Future studies should compare several application programming interfaces on thoroughly curated datasets.

Funding Statement: National Institute for Health Research, United Kingdom. PAK is supported by an NIHR Clinician Scientist Award (NIHR-CS-2014-14-023). The research was also supported by the National Institute for Health Research (NIHR) Biomedical Research Centre based at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology.

Declaration of Interests: JL and TB are employees of DeepMind Technologies, a subsidiary of Alphabet Inc. RC is an intern at DeepMind. PK is an external consultant for DeepMind. The remaining authors have no conflict of interest to declare.

Ethics Approval Statement: Not required.
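The evaluation metrics named above can be made concrete with a minimal sketch in pure Python. The labels and scores below are invented for illustration only (the study's own predictions are not reproduced here); sensitivity, specificity and PPV are computed at the 0·5 cut-off used in the binary tasks, and AUPRC is approximated by average precision over the score ranking.

```python
# Hypothetical labels (1 = disease present) and model scores;
# these numbers are illustrative only, not from the study.
y_true  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_score = [0.9, 0.2, 0.8, 0.4, 0.7, 0.1, 0.65, 0.3, 0.95, 0.05]

# Apply a fixed cut-off of 0.5 to turn scores into class predictions.
y_pred = [1 if s >= 0.5 else 0 for s in y_score]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

sensitivity = tp / (tp + fn)   # recall: diseased cases correctly flagged
specificity = tn / (tn + fp)   # healthy cases correctly cleared
ppv         = tp / (tp + fp)   # precision: flagged cases truly diseased

# AUPRC via average precision: mean of the precision observed each time
# another true positive is encountered while ranking by score.
ranked = sorted(zip(y_score, y_true), reverse=True)
hits, auprc = 0, 0.0
for rank, (_, label) in enumerate(ranked, start=1):
    if label == 1:
        hits += 1
        auprc += hits / rank
auprc /= sum(y_true)

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"ppv={ppv:.2f} AUPRC={auprc:.2f}")
```

In practice a library routine such as scikit-learn's `average_precision_score` would replace the hand-rolled loop; it is written out here only to show what the AUPRC summarises.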

Highlights

  • Diagnosis depends on data: its collection, integration and interpretation enable accurate classification of clinical presentations into an accepted diagnostic category

  • Task 2: Classification of pneumonia vs normal on paediatric chest X-ray (CXR). The paediatric CXR set provided by Guangzhou Medical University/Shiley Eye Institute comprised 5827 CXR images from 5232 patients (1582 showing normal paediatric chest X-rays, and 4245 showing pneumonia)

  • The area under the precision recall curve (AUPRC) of the automated deep learning model trained to distinguish these four categories was 0.99, while best accuracy was reached at a cut-off value of 0.5, with a sensitivity of 97.3%, a specificity of 100% and a positive predictive value (PPV) of 97.7%
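Choosing the cut-off that maximises accuracy, as in the highlight above, amounts to sweeping candidate thresholds over the model's scores. The sketch below is illustrative only, with invented labels and scores, and simply tries every observed score as a candidate cut-off.

```python
# Invented binary labels and scores for demonstration;
# not data from the study.
y_true  = [1, 1, 1, 0, 0, 1, 0, 0]
y_score = [0.92, 0.55, 0.48, 0.45, 0.30, 0.85, 0.60, 0.10]

def accuracy_at(cutoff):
    """Fraction of cases classified correctly at a given cut-off."""
    preds = [1 if s >= cutoff else 0 for s in y_score]
    return sum(p == t for p, t in zip(preds, y_true)) / len(y_true)

# Every observed score is a candidate cut-off; keep the best-scoring one.
best = max(set(y_score), key=accuracy_at)
print(f"best cut-off {best} gives accuracy {accuracy_at(best):.3f}")
```

On a well-calibrated classifier the accuracy-maximising cut-off often lands near 0·5, as reported for the four-category task, but it need not: it depends on class balance and the score distribution.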


Introduction

Diagnosis depends on data: its collection, integration and interpretation enable accurate classification of clinical presentations into an accepted diagnostic category. In artificial intelligence (AI), the technique of deep learning uses artificial neural networks – so-called because of their superficial resemblance to biological neural networks – as a computational model to discover intricate structure and patterns in large, high-dimensional datasets such as medical images.[1] A key feature of these networks is their ability to fine-tune based on experience, allowing them to adapt to their inputs and become capable of learning. It is this characteristic that makes them powerful tools for pattern recognition, classification, and prediction. Since 2012, deep learning has brought seismic changes to the technology industry, with major breakthroughs in areas as diverse as computer vision, image captioning, speech recognition, natural language translation, robotics, and even self-driving cars.[4,5,6,7,8,9] In 2015, Scientific American listed deep learning as one of their “world changing” ideas for the year.[10]


