Abstract

Deep neural networks have significantly advanced many fields. However, these models often generalize poorly when the distribution of test samples differs from that of the training samples. Recently, several fully test-time adaptation methods have been proposed that adapt a trained model to unlabeled test samples before prediction in order to improve test performance. Despite achieving remarkable results, these methods involve only a single trained model, which can provide only limited information about the test samples. In real-world scenarios, multiple trained models may be available that are each beneficial to the test samples and complementary to one another. To better exploit such models, in this paper we propose the problem of multi-source fully test-time adaptation, in which multiple trained models are adapted to the test samples. To address this problem, we introduce a simple yet effective method that combines a weighted aggregation scheme with two unsupervised losses. The former adaptively assigns a higher weight to a more relevant model, while the latter jointly adapt the models using online unlabeled samples. Extensive experiments on three image classification datasets show that the proposed method outperforms baseline methods, demonstrating its superiority in adapting multiple models.
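The abstract does not spell out the weighting rule or the losses, but the idea of weighting source models by relevance and adapting them with an unsupervised objective can be sketched as follows. This is an illustrative scheme, not the paper's method: it weights each model by its (softmaxed, negated) mean prediction entropy on the test batch, on the assumption that a more confident model is more relevant, and uses entropy minimization on the aggregated prediction as one common unsupervised test-time loss. All function names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def weighted_aggregate(logits_list):
    """Combine predictions from several source models on one test batch.

    Each model's weight is the softmax of its negative mean prediction
    entropy, so a more confident (presumably more relevant) model gets a
    higher weight. Illustrative only; the paper's exact rule is not
    given in the abstract.
    """
    probs = [softmax(l, axis=1) for l in logits_list]
    ent = np.array([-(p * np.log(p + 1e-8)).sum(axis=1).mean() for p in probs])
    weights = softmax(-ent, axis=0)          # lower entropy -> higher weight
    agg = sum(w * p for w, p in zip(weights, probs))
    return agg, weights

def entropy_loss(agg_probs):
    """A common unsupervised test-time objective: minimize the entropy
    of the aggregated prediction (here just evaluated, not backpropagated)."""
    return float(-(agg_probs * np.log(agg_probs + 1e-8)).sum(axis=1).mean())

# Usage: two source models, the second far more confident on this batch.
rng = np.random.default_rng(0)
logits_a = rng.normal(size=(4, 3))                   # uncertain model
logits_b = 5.0 * np.eye(3)[[0, 1, 2, 0]]             # confident model
agg, w = weighted_aggregate([logits_a, logits_b])
```

In a full test-time-adaptation loop, `entropy_loss` (and a second unsupervised loss) would be differentiated with respect to a subset of each model's parameters, e.g. normalization statistics, and minimized online as test batches arrive.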
