Source-Free Continual Adaptive Learning With Limited Labels on Evolving Data Drifts

Abstract

In real-world settings, neural network models must adapt to evolving distributional shifts without catastrophic forgetting in order to remain trustworthy and robust. A major obstacle to model adaptation is that access to the source data on which the model was previously trained is often restricted due to data privacy concerns. We propose a source-free, parameter-efficient continual adaptive learning method for adapting to evolving data shifts with limited labels. We evaluate the method on large-scale image classification and semantic segmentation tasks using fifteen data shift types encountered incrementally in continually evolving data drift settings. Extensive experiments demonstrate that the proposed method achieves state-of-the-art adaptation performance under continual data shifts, outperforming existing continual learning and domain adaptation methods.
