Abstract

Recently, Test-Time Adaptation (TTA) has emerged as a solution to real-world challenges posed by inconsistencies between training and testing distributions. This technique refines a pretrained model for a target domain using only unlabeled test data. However, most existing TTA methods rely predominantly on entropy minimization for adaptation and assign equal weight to every sample. In this paper, we propose a novel confidence-based optimization strategy and theoretically show that it tends to yield larger gradients than entropy-based methods, giving it the potential to improve performance. Moreover, we show that the importance of samples is frequently underestimated and propose a novel truncation function that assigns an adaptive weight to each sample. The proposed method, named CSTTA, combines the confidence-based optimization strategy with the sample-reweighting strategy, aiming to better exploit sample information for quicker adaptation to new scenarios. Extensive experiments on three digital datasets (CIFAR10-C, CIFAR100-C, and ImageNet-C) and a real-world dataset (ImageNet-3DCC) demonstrate the effectiveness of the proposed method.
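To make the contrast concrete: entropy-minimization TTA (the baseline family the abstract refers to) adapts by reducing the Shannon entropy of the model's softmax predictions on test samples, weighting all samples equally. The abstract does not give CSTTA's exact loss or truncation function, so the snippet below is only a minimal illustrative sketch: it shows the standard entropy objective and a *hypothetical* entropy-thresholded sample weight (the threshold `e0` and the exponential form are assumptions for illustration, not the paper's definitions).

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Shannon entropy H(p) = -sum_c p_c log p_c.

    Standard entropy-minimization TTA uses this (averaged over a
    test batch) as the adaptation loss, with equal sample weights.
    """
    return -sum(p * math.log(p) for p in probs if p > 0)

def confidence_weight(probs, e0=0.4):
    """Hypothetical adaptive sample weight (NOT the paper's function).

    Samples whose prediction entropy exceeds a fraction e0 of the
    maximum possible entropy log(C) are truncated to weight 0;
    confident samples get a weight that grows as entropy shrinks.
    """
    num_classes = len(probs)
    threshold = e0 * math.log(num_classes)
    h = entropy(probs)
    if h >= threshold:
        return 0.0  # truncate unreliable, high-entropy samples
    return math.exp(-h / threshold)

# A reweighted adaptation loss would then sum w_i * L(x_i) over the
# batch instead of treating every sample equally.
```

Under this sketch, a confident prediction (low entropy) contributes with a large weight, while a near-uniform prediction is cut off entirely, which is one simple way to realize the adaptive, non-uniform weighting the abstract argues for.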
