Pairwise learning is a vital part of machine learning: it depends on pairs of training instances and is naturally suited to modeling relationships between samples. However, as a data-driven paradigm, it raises serious privacy concerns. Differential privacy (DP) is a useful tool for protecting privacy in machine learning, but the excess population risk bounds in existing analyses of DP pairwise learning are loose. In this paper, we propose a gradient perturbation algorithm for pairwise learning that achieves better risk bounds under the Polyak–Łojasiewicz (PL) condition, covering both convex and non-convex cases. Specifically, for the risk bound in expectation, the previous best rates are $\mathcal{O}\left(\frac{p}{n^{2}\epsilon^{2}}+\frac{1}{n}\right)$ and $\mathcal{O}\left(\frac{\sqrt{p}}{n\epsilon}+\frac{1}{n}\right)$ under the strongly convex and convex conditions, respectively. In this paper, we use on-average stability and achieve an $\mathcal{O}\left(\min\left\{\frac{\sqrt{p}}{n^{1.5}\epsilon}+\frac{p^{1.5}}{n^{2.5}\epsilon^{3}},\ \frac{p}{n^{2}\epsilon^{2}}+\frac{1}{n}\right\}\right)$ bound, significantly improving on previous bounds. For the high-probability risk bound, the previous best results are obtained via uniform stability, yielding $\mathcal{O}\left(\beta_n^U+\frac{\sqrt{p}}{n\epsilon}\right)$ excess population risk bounds under the strongly convex or convex conditions, where $\beta_n^U$ is the traditional pairwise uniform stability parameter; it is large because it captures the worst-case sensitivity of the loss. In this paper, we propose pairwise locally elastic stability and improve the high-probability bound to $\mathcal{O}\left(\beta_n^E+\frac{\sqrt{p}}{n\epsilon}\right)$, in which the pairwise locally elastic stability parameter satisfies $\beta_n^E \ll \beta_n^U$ because it captures the average sensitivity of the pairwise loss function.
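To make the contrast behind the high-probability improvement concrete, here is a schematic of the two stability notions (not the paper's exact definitions), assuming $A(S)$ denotes the model trained on sample $S$ and $S^{i}$ denotes $S$ with its $i$-th point replaced:
\[
\underbrace{\sup_{S,\,i,\,(z,z')}\bigl|\ell(A(S);z,z')-\ell(A(S^{i});z,z')\bigr|\le\beta_n^U}_{\text{pairwise uniform stability}}
\qquad\text{vs.}\qquad
\underbrace{\bigl|\ell(A(S);z,z')-\ell(A(S^{i});z,z')\bigr|\le\beta(z_i;z,z')}_{\text{pairwise locally elastic stability}}.
\]
Under this reading, the effective parameter $\beta_n^E$ behaves like an average of $\beta(z_i;z,z')$ over samples and pairs rather than a supremum, which is why $\beta_n^E\ll\beta_n^U$ whenever the loss is sensitive only on atypical pairs.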
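The abstract names gradient perturbation but does not spell the algorithm out. Below is a minimal NumPy sketch of one plausible instantiation, assuming full-gradient steps over all sample pairs, per-pair gradient clipping, and Gaussian noise; the names `dp_pairwise_gd`, `pair_grad`, and `sigma` are hypothetical, and a real privacy guarantee would calibrate `sigma` from $(\epsilon,\delta)$ and the iteration count, which this sketch leaves abstract.

```python
import numpy as np

def dp_pairwise_gd(X, y, pair_grad, T=100, eta=0.1, sigma=1.0, clip=1.0, seed=0):
    """Sketch of gradient perturbation for pairwise learning: each step
    averages clipped pairwise-loss gradients over all pairs, then adds
    Gaussian noise scaled to the averaged gradient's L2 sensitivity."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = np.zeros(p)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    # Replacing one sample changes the n-1 pairs containing it, each clipped
    # gradient by at most 2*clip, so the average moves by at most 4*clip/n.
    sensitivity = 2.0 * clip * (n - 1) / len(pairs)
    for _ in range(T):
        g = np.zeros(p)
        for i, j in pairs:
            gij = pair_grad(w, X[i], y[i], X[j], y[j])
            norm = np.linalg.norm(gij)
            if norm > clip:
                gij = gij * (clip / norm)  # clip to enforce bounded sensitivity
            g += gij
        g = g / len(pairs) + rng.normal(0.0, sigma * sensitivity, size=p)
        w -= eta * g
    return w

def rank_grad(w, xi, yi, xj, yj):
    """Hypothetical pairwise loss gradient: logistic ranking loss
    log(1 + exp(-s * w @ (xi - xj))) with s = sign(yi - yj)."""
    s = np.sign(yi - yj)
    if s == 0:
        return np.zeros_like(w)
    d = xi - xj
    return -s * d / (1.0 + np.exp(s * (w @ d)))
```

Calling `dp_pairwise_gd(X, y, rank_grad)` runs the sketch on a linear ranking model; the double loop over all $O(n^2)$ pairs is kept for clarity, and a practical implementation would subsample pairs at each step.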