Abstract

Background and objective

With the rapid development of data science methods such as deep learning, these techniques are increasingly applied in healthcare and medicine. However, owing to regulations and ethical constraints, it is difficult in practice to obtain the large amounts of medical data needed to train deep learning models. Transfer learning is a powerful tool for reusing knowledge gained in other domains, making it possible to retrain deep learning models with only small datasets in medical image processing. In this contribution, a systematic study of model transfer techniques, such as fine-tuning parts of the network or adding additional layers, was conducted for medical image data.

Methods

The study addressed a binary classification task on a colorectal cancer dataset comprising microsatellite-unstable or hypermutated (MSIMUT) and microsatellite-stable (MSS) images. Using K-fold cross-validation, the performance of five pretrained models (DenseNet121, DenseNet201, InceptionV3, MobileNetV2 and ResNet50) was assessed in terms of balanced accuracy. As baselines, transfer learning as a feature extractor was combined with principal component analysis followed by linear discriminant analysis (PCA-LDA) or a support vector machine (PCA-SVM), and these combinations were compared with their transfer learning counterparts.

Conclusions

The results show that adding convolutional layers performs clearly better than using the original network unchanged or fine-tuning only its last layers. Furthermore, a proposed bagging method performed well on a test dataset. This study reduces the workload for future transfer learning tasks in the biomedical domain by allowing promising transfer learning strategies to be tested first.
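A PCA-SVM baseline evaluated with K-fold cross-validation and balanced accuracy, as described above, could be sketched roughly as follows. This is a minimal illustration, not the authors' code: synthetic features stand in for the CNN-extracted features, and the component count, kernel, and fold count are assumptions.

```python
# Minimal sketch of a PCA-SVM baseline with K-fold cross-validation.
# Synthetic features stand in for pretrained-CNN features; all
# hyperparameters here are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))     # stand-in for CNN feature vectors
y = rng.integers(0, 2, size=200)     # binary labels, e.g. 0 = MSS, 1 = MSIMUT

# PCA for dimensionality reduction, then an SVM classifier
pipeline = make_pipeline(StandardScaler(), PCA(n_components=16), SVC(kernel="rbf"))

# Stratified K-fold cross-validation scored by balanced accuracy
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(pipeline, X, y, cv=cv, scoring="balanced_accuracy")
print(scores.mean())
```

The same scaffold applies to the PCA-LDA baseline by swapping the SVC step for a linear discriminant analysis classifier.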
