Abstract

Both semi-supervised learning and transfer learning aim to lower the annotation burden of training models. However, these two tasks are usually studied separately: most semi-supervised learning algorithms train models from scratch, while transfer learning assumes a pre-trained model as the initialization. In this work, we focus on a previously less-studied setting that further reduces annotation effort by combining the two, where a pre-trained source model serves as the initialization for semi-supervised learning. Since powerful pre-trained models are now ubiquitous and can considerably benefit various downstream tasks, this setting is highly relevant to real-world applications, yet designing effective algorithms for it remains challenging. To enable transfer learning under semi-supervised settings, we propose a hierarchical self-regularization mechanism that exploits unlabeled samples more effectively, introducing a novel self-regularizer that combines individual-level and population-level terms. The former employs self-distillation to regularize the learned deep features of each individual sample, and the latter enforces self-consistency between the feature distributions of labeled and unlabeled samples. Samples in both regularizers are weighted by an adaptive strategy, in which the strength of each term is controlled by the confidence of each sample. To validate our algorithm, extensive experiments are conducted on diverse datasets, including CIFAR-10 for general object recognition, CUB-200-2011 and MIT Indoor-67 for fine-grained classification, and MURA for medical image classification. Compared with state-of-the-art semi-supervised learning methods, including Pseudo Label, Mean Teacher, MixMatch, and FixMatch, our algorithm demonstrates two advantages: first, it adopts a new perspective on the problems caused by inadequate supervision and achieves very competitive results; second, it is complementary to these methods and can therefore be combined with them for additional improvements. Furthermore, our method can also be applied to fully supervised transfer learning and self-supervised learning. Our code is available at https://github.com/SHI-Labs/Semi-Supervised-Transfer-Learning.
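To make the two-level mechanism concrete, below is a minimal sketch in PyTorch-style Python, written only from the description above: the function names, the use of KL divergence for the individual-level self-distillation term, and simple moment matching for the population-level consistency term are illustrative assumptions, not the paper's exact formulation or its released code.

```python
# Sketch of the two self-regularizers described in the abstract (assumed PyTorch form).
import torch
import torch.nn.functional as F


def individual_self_distillation(student_logits, teacher_logits, confidence):
    """Per-sample self-distillation: keep the fine-tuned model's predictions close to
    those of the frozen pre-trained source model, weighted by per-sample confidence."""
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=1),
        F.softmax(teacher_logits, dim=1),
        reduction="none",
    ).sum(dim=1)                      # one KL value per sample
    return (confidence * kl).mean()   # adaptive per-sample weighting


def population_consistency(labeled_feats, unlabeled_feats):
    """Population-level self-consistency: align first- and second-order statistics of
    labeled and unlabeled feature distributions (a simple moment-matching surrogate)."""
    mean_gap = (labeled_feats.mean(0) - unlabeled_feats.mean(0)).pow(2).sum()
    var_gap = (labeled_feats.var(0) - unlabeled_feats.var(0)).pow(2).sum()
    return mean_gap + var_gap


# Toy usage with random tensors standing in for real model outputs.
if __name__ == "__main__":
    n, c, d = 8, 10, 128
    student_logits = torch.randn(n, c, requires_grad=True)
    teacher_logits = torch.randn(n, c)                            # frozen source model
    confidence = F.softmax(teacher_logits, dim=1).max(1).values   # per-sample confidence
    labeled_feats = torch.randn(n, d, requires_grad=True)
    unlabeled_feats = torch.randn(4 * n, d)

    loss = individual_self_distillation(student_logits, teacher_logits, confidence) \
        + 0.1 * population_consistency(labeled_feats, unlabeled_feats)
    loss.backward()
    print(f"combined self-regularization loss: {loss.item():.4f}")
```

In the full method these terms would be added to the ordinary supervised loss, with the confidence-based weighting applied adaptively across both labeled and unlabeled batches.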
