Abstract

Convolutional neural networks (CNNs) have achieved remarkable success in computer vision, largely owing to their ability to learn rich image representations from large-scale annotated data. In medical image analysis, however, large amounts of annotated data are not always available: the ground-truth data acquired are often insufficient to train CNNs from scratch without overfitting and convergence problems. Applying deep CNNs in the medical imaging domain is therefore challenging, and transfer learning techniques have been shown to address this challenge. In this paper, our target task is diabetic retinopathy fundus image classification using CNN-based transfer learning. Experiments are performed on 1014 and 1200 fundus images from the two publicly available datasets DR1 and MESSIDOR. We carry out the target task with three different methods: 1) fine-tuning all network layers of each of several pre-trained CNN models; 2) fine-tuning a pre-trained CNN model in a layer-wise manner; 3) using pre-trained CNN models to extract features from fundus images, and then training support vector machines on these features. Experimental results show that CNN-based transfer learning achieves better classification results on our task with small datasets (target domain) by exploiting knowledge learned from related tasks with larger datasets (source domain). Transfer learning is a promising technique for promoting the use of deep CNNs in the medical field when only limited amounts of data are available.
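The third method described above, training an SVM on CNN-derived features, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each fundus image has already been passed through a pre-trained CNN yielding a fixed-length feature vector (a hypothetical 512-dimensional vector here), and substitutes synthetic feature vectors for the real extracted ones.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for CNN features: in practice these vectors would come
# from a pre-trained model's penultimate layer applied to fundus images.
rng = np.random.default_rng(0)
n_images, feat_dim = 200, 512
X = rng.normal(size=(n_images, feat_dim))
y = rng.integers(0, 2, size=n_images)  # 0 = normal, 1 = diabetic retinopathy
X[y == 1] += 0.5  # shift one class so the toy data are separable

# Train an SVM classifier on the (frozen) CNN features.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = SVC(kernel="linear", C=1.0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

Because the pre-trained CNN is used only as a fixed feature extractor, no backpropagation through the network is needed, which makes this approach attractive when the target dataset is too small to fine-tune safely.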
