Stochastic optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem in machine learning. Despite extensive studies on AUPRC optimization, its generalization remains an open problem. In this work, we present the first attempt at algorithm-dependent generalization analysis of stochastic AUPRC optimization. The obstacles are three-fold. First, our consistency analysis shows that the majority of existing stochastic estimators are biased under biased sampling strategies. To address this issue, we propose a stochastic estimator with sampling-rate-invariant consistency and reduce the consistency error by estimating the full-batch scores with a score memory. Second, standard techniques for algorithm-dependent generalization analysis cannot be applied directly to listwise losses. To fill this gap, we extend model stability from instance-wise losses to listwise losses. Third, AUPRC optimization involves a compositional optimization problem, which incurs complicated computations. We reduce this computational complexity via matrix spectral decomposition. Building on these techniques, we derive the first algorithm-dependent generalization bound for AUPRC optimization. Motivated by the theoretical results, we propose a generalization-induced learning framework that improves AUPRC generalization by effectively increasing the batch size and the number of valid training examples. Empirically, experiments on image retrieval and long-tailed classification demonstrate the effectiveness and soundness of our framework.
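For intuition, the sketch below shows one way a score-memory-based stochastic estimator of Average Precision (the empirical counterpart of AUPRC) can be set up: a buffer holds scores for all training examples, the entries of the current mini-batch are refreshed each step, and the listwise ranking terms are computed against the full buffer rather than the batch alone. The class name `ScoreMemoryAPEstimator`, the sigmoid surrogate, and all hyperparameters are illustrative assumptions, not the paper's exact construction.

```python
import torch

def smooth_indicator(diff, tau=1.0):
    # Sigmoid surrogate for the step function 1[diff >= 0] (an assumed choice).
    return torch.sigmoid(diff / tau)

class ScoreMemoryAPEstimator:
    """Hypothetical sketch of a stochastic AP (AUPRC surrogate) estimator
    that approximates full-batch ranking statistics with a score memory."""

    def __init__(self, num_examples, labels, tau=1.0):
        self.scores = torch.zeros(num_examples)  # score memory for all examples
        self.labels = labels                     # 0/1 labels, shape (num_examples,)
        self.tau = tau

    def loss(self, batch_idx, batch_scores):
        # Refresh the memory entries of the current mini-batch (detached copy).
        self.scores[batch_idx] = batch_scores.detach()

        pos_mask = self.labels[batch_idx] == 1
        pos_scores = batch_scores[pos_mask]      # positives in the batch carry gradients
        if pos_scores.numel() == 0:
            return batch_scores.sum() * 0.0      # no positives: zero loss, keeps the graph

        # Pairwise comparisons of every memorized score against each batch positive.
        diff = self.scores.unsqueeze(0) - pos_scores.unsqueeze(1)   # shape (P, N)
        surrogate = smooth_indicator(diff, self.tau)
        ranks_all = surrogate.sum(dim=1)                            # examples ranked above i
        is_pos = (self.labels == 1).float()
        ranks_pos = (surrogate * is_pos).sum(dim=1)                 # positives ranked above i

        # Negated surrogate AP: mean precision at each positive's rank.
        return -(ranks_pos / ranks_all.clamp(min=1e-8)).mean()
```

In a training loop, one would compute `batch_scores = model(x)`, call `estimator.loss(idx, batch_scores)`, and backpropagate; because the denominators are computed over the memory, the stochastic estimate tracks the full-batch ranking statistics rather than a biased mini-batch subsample.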