Abstract
Automatic classifier accuracy evaluation (ACAEval) on unlabeled test sets is critical for deploying classifiers in unseen real-world environments. Dataset-level regression on synthesized meta-datasets (composed of many sample sets) has shown promising results for ACAEval. However, the existing meta-dataset for ACAEval is created using simple image transformations such as rotation and background substitution, which makes it difficult to ensure a reasonable distribution shift between the sample sets and the test set. When the distribution shift is large, it becomes challenging to estimate the classifier accuracy on the test set from those sample sets. To make ACAEval more robust, this paper customizes a meta-dataset in which each sample set has a reasonable distribution shift relative to the test set. An intra-class cycle-consistent adversarial learning (ICAL) method is introduced to transfer the style of a labeled training set toward that of the test set, jointly addressing domain shift, label flipping (the semantic information may change after style transformation), and the diversity of the sample sets in the meta-dataset. Experiments validate that, under the same experimental setup, our method outperforms existing ACAEval methods by a clear margin and achieves state-of-the-art performance on several standard benchmarks, including digit classification and natural image classification.
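For context, the dataset-level regression framework the abstract builds on can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a single hand-crafted dataset-level feature (the mean maximum softmax confidence of the frozen classifier on each sample set) and a linear regressor, and all function names are hypothetical.

```python
# Minimal sketch of dataset-level regression for accuracy estimation (ACAEval).
# Assumption (not from the paper): each sample set is summarized by one feature,
# the mean of the classifier's per-image maximum softmax probability.

import numpy as np
from sklearn.linear_model import LinearRegression


def dataset_feature(probs: np.ndarray) -> float:
    """Dataset-level feature: mean per-image maximum softmax probability.

    `probs` is an (n_images, n_classes) array of softmax outputs produced by
    the frozen classifier on one sample set.
    """
    return float(probs.max(axis=1).mean())


def fit_accuracy_regressor(meta_dataset):
    """Fit a regressor mapping dataset-level features to classifier accuracy.

    `meta_dataset` is a list of (probs, accuracy) pairs, one per synthesized
    sample set: the classifier's softmax outputs and its ground-truth accuracy
    on that set.
    """
    features = np.array([[dataset_feature(probs)] for probs, _ in meta_dataset])
    accuracies = np.array([acc for _, acc in meta_dataset])
    regressor = LinearRegression()
    regressor.fit(features, accuracies)
    return regressor


def estimate_accuracy(regressor, test_probs: np.ndarray) -> float:
    """Predict accuracy on an unlabeled test set from its dataset-level feature."""
    feature = np.array([[dataset_feature(test_probs)]])
    return float(regressor.predict(feature)[0])
```

In the paper's setting, the sample sets that populate `meta_dataset` would be produced by the ICAL style-transfer step rather than by simple transformations such as rotation or background substitution; the regression step itself is unchanged by that choice.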