Abstract

Fine-tuning pre-trained deep neural networks (DNNs) carefully designed for large-scale natural images may not be suitable for medical images because of the intrinsic differences between the datasets. We propose a strategy for modifying pre-trained DNNs that improves their performance on retinal optical coherence tomography (OCT) images. The deep layers of a pre-trained DNN encode high-level features of natural images, and these features can hinder transfer learning. Our strategy is therefore to remove some of the deep convolutional layers of state-of-the-art pre-trained networks: GoogLeNet, ResNet and DenseNet. In our experiments we search for the best-performing sub-networks on a small-scale and a large-scale OCT dataset, respectively. The results show that the resulting sub-networks not only reduce the computational burden but also improve classification accuracy.

Highlights

  • The retina in the human eye receives light focused by the lens and converts it into neural signals. The main sensory region for this purpose is the macula, located in the central part of the retina. The macula is responsible for the central, high-resolution, color vision that is possible in good light. The retina processes the information gathered by the macula and sends it to the brain via the optic nerve for visual recognition. Macular health can be affected by a number of pathologies, including age-related macular degeneration (AMD) and diabetic macular edema (DME).

  • We investigate the performance on the ImageNet dataset of several modern deep neural networks (DNNs), such as VGGNet, GoogLeNet, ResNet, DenseNet, MobileNet [29,30] and NASNet [31].

  • To assess the sub-network architectures, their diagnostic performance is evaluated on two different optical coherence tomography (OCT) datasets: a large-scale dataset and a small-scale dataset.


Summary

Introduction

The retina in the human eye receives light focused by the lens and converts it into neural signals. Extensive experiments on four distinct medical-imaging applications by Tajbakhsh et al. demonstrate that deeply fine-tuned convolutional neural networks (CNNs) are useful for medical image analysis, performing as well as fully trained CNNs and even outperforming them when limited training data are available. They also observed that the required level of fine-tuning differs from one application to another, which means that the fine-tuning strategy remains an open question [26]. The variation of the learned patterns increases with layer depth, indicating that increasingly invariant representations are learned in the deeper layers. Inspired by these results, we remove the deeper layers of the pre-trained DNNs so that, during transfer learning, the DNNs classify OCT images with the help of the low-level features of natural images, without interference from the high-level features. We briefly discuss the parameters, architectures and sub-networks of three modern DNNs (Inception-v3, ResNet50 and DenseNet121).
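To make the layer-removal idea concrete, the following minimal sketch truncates a pre-trained ResNet50 after an early residual stage and attaches a fresh classifier that is then fine-tuned on OCT images. It is an illustration only, not the authors' code: the cut point (after layer2), the class count, and the torchvision weights API are assumptions; the paper itself compares several sub-networks of Inception-v3, ResNet50 and DenseNet121.

```python
# Sketch: build a "sub-network" by keeping only the shallow layers of a
# pre-trained ResNet50 (low-level natural-image features) and fine-tune it
# on OCT images. Cut point and class count are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_OCT_CLASSES = 4  # illustrative; depends on the OCT dataset used

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Keep the stem and the first two residual stages; drop layer3, layer4
# and the original ImageNet classification head.
sub_network = nn.Sequential(
    backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
    backbone.layer1, backbone.layer2,
)

class TruncatedResNet(nn.Module):
    def __init__(self, features, num_classes):
        super().__init__()
        self.features = features
        self.pool = nn.AdaptiveAvgPool2d(1)        # global average pooling
        self.classifier = nn.Linear(512, num_classes)  # layer2 outputs 512 channels

    def forward(self, x):
        x = self.features(x)
        x = self.pool(x).flatten(1)
        return self.classifier(x)

model = TruncatedResNet(sub_network, NUM_OCT_CLASSES)

# Fine-tune the whole sub-network on OCT images (data loading omitted).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```

The same recipe applies to Inception-v3 and DenseNet121 by cutting after an Inception module or a dense block, respectively; the experiments below compare such cut points on the two OCT datasets.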

Sub-Networks of Inception-v3
Sub-Networks of ResNet50
Sub-Networks of DenseNet121
Experiments and Results
Performance of the Sub-Networks on Large-Scale Dataset
Performance of the Sub-Networks on Small-Scale Dataset
Conclusions

