Abstract

Machine learning has become an increasingly ubiquitous technology as big data continues to inform and influence everyday life and decision-making. In medicine and healthcare, as in most other industries, the two most prevalent machine learning paradigms are currently supervised learning and transfer learning. Both practices rely on large-scale, manually annotated datasets to train increasingly complex models. However, the requirement that data be manually labeled leaves a wealth of unlabeled data unused in both public and private repositories. Self-supervised learning (SSL) is a growing area of machine learning that can take advantage of these unlabeled data: in contrast to other machine learning paradigms, SSL algorithms create artificial supervisory signals from the unlabeled data themselves and pretrain models on those signals. The aim of this review is twofold. First, we provide a formal definition of SSL, divide SSL algorithms into four distinct subsets, and review the state of the art published in each subset between 2014 and 2020. Second, we survey recent SSL algorithms published in healthcare to give medical experts a clearer picture of how they can integrate SSL into their research and thereby leverage unlabeled data.

Highlights

  • Computer vision (CV) is an interdisciplinary subfield of artificial intelligence dealing with the design of algorithms that allow computers to gain a high-level, semantic understanding of images and videos

  • Label creation for supervised learning is time-consuming and costly for large datasets, and this problem is compounded in domains such as digital pathology and laboratory medicine where the manual annotation process often suffers from high inter- and intra-observer variability [9]

  • It is clear that analytical medicine and self-supervised learning are a natural pairing, as the strengths of self-supervised learning address many of the weaknesses that currently exist in machine learning in medicine


Introduction

Computer vision (CV) is an interdisciplinary subfield of artificial intelligence dealing with the design of algorithms that allow computers to gain a high-level, semantic understanding of images and videos. A landmark moment came when AlexNet [5] took first place in the 2012 ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [6], demonstrating for the first time that a convolutional neural network (CNN) could outperform traditional computer vision algorithms in head-to-head competition. Since then, supervised deep learning has become the dominant approach in CV, but the semantic distributions these models learn depend heavily on their training datasets, meaning that performance and generalizability are typically upper-bounded by dataset size. This is especially true within the domain of object recognition [8]. Label creation for supervised learning is time-consuming and costly for large datasets, and this problem is compounded in domains such as digital pathology and laboratory medicine, where the manual annotation process often suffers from high inter- and intra-observer variability [9].
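
To make the idea of artificial supervisory signals concrete, the sketch below implements a simple pretext task, rotation prediction, in which unlabeled images are rotated by a random multiple of 90 degrees and a network is trained to predict the applied rotation; the rotation class serves as a label generated from the data itself, so no manual annotation is needed. This is a minimal illustrative sketch assuming PyTorch and torchvision; the ResNet-18 backbone, optimizer, and hyperparameters are hypothetical choices and are not drawn from any specific method reviewed here.

    # Minimal sketch of a self-supervised pretext task: rotation prediction.
    # Each unlabeled image is rotated by 0, 90, 180, or 270 degrees, and the
    # network is trained to predict the rotation class; the supervisory signal
    # is generated from the data itself rather than from manual labels.
    import torch
    import torch.nn as nn
    import torchvision

    def make_rotation_batch(images):
        """Create (rotated image, rotation class) pairs from unlabeled images (C x H x W tensors)."""
        rotated, targets = [], []
        for img in images:
            k = torch.randint(0, 4, (1,)).item()            # number of quarter-turns: 0, 1, 2, or 3
            rotated.append(torch.rot90(img, k, dims=(1, 2)))
            targets.append(k)
        return torch.stack(rotated), torch.tensor(targets)

    # Backbone with a 4-way classification head for the pretext task (hypothetical configuration).
    model = torchvision.models.resnet18(num_classes=4)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    def pretrain_step(unlabeled_batch):
        """One self-supervised training step on a batch of unlabeled images."""
        inputs, targets = make_rotation_batch(unlabeled_batch)
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
        return loss.item()

After such pretraining on unlabeled data, the backbone's learned representations can be fine-tuned on a much smaller labeled dataset, which is precisely the setting in which manual annotation is scarce, costly, or subject to observer variability.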

