Abstract

To successfully train a deep neural network, a large amount of human-labeled data is required. Unfortunately, in many areas, collecting and labeling data is a difficult and tedious task. Several ways have been developed to mitigate the problems associated with the shortage of data, the most common of which is transfer learning. However, in many cases, the use of transfer learning as the only remedy is insufficient. In this study, we improve the training of deep neural models and increase the classification accuracy under a scarcity of data by using the self-supervised learning technique. Self-supervised learning allows an unlabeled dataset to be used for pretraining the network, as opposed to transfer learning, which requires labeled datasets. The pretrained network can then be fine-tuned using the annotated data. Moreover, we investigated the effect of combining the self-supervised learning approach with transfer learning. It is shown that this strategy outperforms network training from scratch or with transfer learning alone. The tests were conducted on a very important and sensitive application (skin lesion classification), but the presented approach can be applied to a broader family of applications, especially in the medical domain, where the scarcity of data is a real problem.

Highlights

  • Deep learning algorithms have achieved tremendous success in various image processing tasks. Currently, deep learning-based approaches obtain state-of-the-art performance in vision tasks such as image classification [1], object localization and detection [2,3,4], and object segmentation [5]. It is well known that any deep learning system requires a lot of annotated data to obtain satisfying results [6]

  • The proposed approach, which combines transfer learning with self-supervised learning, led to a significant increase in accuracy and ROC

  • The network performance was tested using different ways of weight initialization: random initialization, transfer learning, self-supervised learning, and a combination of transfer learning with self-supervised learning (a minimal initialization sketch follows this list)
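
A minimal PyTorch sketch of how these four initialization strategies could be set up. The ResNet-50 backbone, the checkpoint filename `ssl_pretext.pth`, and the `num_classes` value are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torchvision.models as models


def build_model(init: str, num_classes: int = 8, ssl_ckpt: str = "ssl_pretext.pth"):
    """Return a classifier backbone with one of the studied initializations.

    init: 'random'        -- train from scratch
          'transfer'      -- start from ImageNet weights
          'ssl'           -- start from a self-supervised (pretext-task) checkpoint
          'transfer+ssl'  -- ImageNet weights further pretrained on the pretext task
    """
    if init == "random":
        net = models.resnet50(weights=None)
    elif init == "transfer":
        net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    elif init in ("ssl", "transfer+ssl"):
        # The checkpoint is assumed to hold backbone weights saved after pretext
        # training; for 'transfer+ssl' the pretext training itself started from
        # ImageNet weights.
        net = models.resnet50(weights=None)
        state = torch.load(ssl_ckpt, map_location="cpu")
        net.load_state_dict(state, strict=False)  # pretext-head keys, if any, are skipped
    else:
        raise ValueError(f"unknown initialization: {init}")

    # Fresh classification head for the downstream (skin lesion) task.
    net.fc = torch.nn.Linear(net.fc.in_features, num_classes)
    return net
```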


Introduction

Deep learning algorithms have achieved tremendous success in various image processing tasks. Transfer learning involves training the network on a huge dataset (e.g., ImageNet) [9] and treating this network as a starting point for training on the target task. This method can provide a performance improvement. In contrast to transfer learning, self-supervised pretraining does not require a labeled dataset. This is very useful in a variety of tasks in which a large part of the available data is not annotated (e.g., in medicine) or the training dataset is artificially generated without accompanying labels [11]. The use of the self-supervised learning method involves two steps (Figure 1): pretraining the network with unlabeled data (the pretext task) and training on the target task with labeled data (the downstream task).
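
To make the two-step workflow concrete, the following sketch shows a simplified pretext/downstream split in PyTorch. The jigsaw permutation count, the number of lesion classes, the ResNet-50 backbone, and the dummy data loaders are assumptions for illustration; in particular, the real jigsaw pretext task processes the shuffled tiles separately rather than as a single image.

```python
import torch
import torch.nn as nn
import torchvision.models as models

N_PERMUTATIONS = 100  # assumed size of the jigsaw permutation set
N_CLASSES = 8         # assumed number of skin-lesion classes

# Shared backbone; ResNet-50 is used purely for illustration.
backbone = models.resnet50(weights=None)
feat_dim = backbone.fc.in_features
backbone.fc = nn.Identity()

criterion = nn.CrossEntropyLoss()

# Dummy batches stand in for the real unlabeled / labeled data loaders.
pretext_loader = [(torch.randn(4, 3, 224, 224), torch.randint(0, N_PERMUTATIONS, (4,)))]
labeled_loader = [(torch.randn(4, 3, 224, 224), torch.randint(0, N_CLASSES, (4,)))]

# --- Step 1: pretext task on unlabeled data --------------------------------
# The "label" (which permutation was applied to the shuffled image) is derived
# from the data itself, so no human annotation is needed.
pretext_head = nn.Linear(feat_dim, N_PERMUTATIONS)
opt = torch.optim.Adam(list(backbone.parameters()) + list(pretext_head.parameters()), lr=1e-4)
for shuffled_imgs, perm_idx in pretext_loader:
    loss = criterion(pretext_head(backbone(shuffled_imgs)), perm_idx)
    opt.zero_grad(); loss.backward(); opt.step()

# --- Step 2: downstream task on labeled data --------------------------------
# The pretrained backbone is kept; only a fresh classification head is attached.
clf_head = nn.Linear(feat_dim, N_CLASSES)
opt = torch.optim.Adam(list(backbone.parameters()) + list(clf_head.parameters()), lr=1e-5)
for imgs, labels in labeled_loader:
    loss = criterion(clf_head(backbone(imgs)), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```

In practice, step 1 would iterate over the full unlabeled dataset, and step 2 would be the usual supervised fine-tuning of the downstream classifier on the annotated images.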
