Abstract

Zero-shot learning (ZSL) is an attractive technique that can recognize novel object classes without any visual examples, but most existing methods assume that the class labels of the training instances from seen classes are accurate and reliable. In many real-world scenarios, however, label quality is compromised by various factors, and ZSL performance degrades under label noise. In this paper, we propose an effective approach that employs a robust loss function to handle label noise in seen classes. Specifically, we construct a new denoising framework to reduce the impact of outliers and mislabeled samples. To mitigate overfitting to noisily labeled samples, a ramp-style robust loss is used to filter out the negative influence of data with anomalous loss values, so that only samples within a normal loss range contribute to training, thereby improving overall robustness. Additionally, we introduce the concave-convex procedure (CCCP) to address the non-convexity of the ramp-style loss and devise an efficient update scheme based on the alternating direction method of multipliers (ADMM). Extensive experiments on several benchmark datasets (AWA2, SUN, CUB) demonstrate the superior performance of our framework over state-of-the-art ZSL methods in various noisy-label environments.
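To make the abstract's key ingredient concrete, the sketch below shows the standard ramp loss and its difference-of-convex decomposition, which is the usual reason CCCP applies; the notation ($z$, truncation parameter $s$, hinge $H_a$) is an illustrative assumption and may differ from the paper's exact formulation.

```latex
% Minimal sketch of a standard ramp loss and its difference-of-convex
% decomposition (assumed formulation; the paper's exact ramp-style loss
% and its truncation parameter may differ).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Given a prediction score $z$ and a truncation parameter $s < 1$, the ramp loss
caps the hinge loss so that a grossly mislabeled sample contributes at most a
bounded penalty:
\begin{equation}
  R_s(z) \;=\; \min\bigl(1 - s,\; \max(0,\, 1 - z)\bigr)
         \;=\; H_1(z) - H_s(z),
  \qquad H_a(z) \;=\; \max(0,\, a - z).
\end{equation}
Because $R_s$ is a difference of two convex hinge functions, CCCP applies:
at each iteration the concave part $-H_s(z)$ is linearized at the current
iterate, leaving a convex surrogate that can be minimized with standard
solvers such as ADMM-based updates.
\end{document}
```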
