Abstract

In this paper, we present a comprehensive study of differentially private stochastic gradient descent (SGD) algorithms for stochastic convex optimization (SCO). Most of the existing literature imposes additional assumptions on the losses, such as Lipschitz continuity, smoothness, or strong convexity, together with uniformly bounded model parameters, or focuses on the Euclidean (i.e., $\ell_2^d$) setting. However, these restrictive requirements exclude many popular losses, including the absolute loss and the hinge loss. By loosening these restrictions, we propose two differentially private SGD algorithms, one without and one with the shuffle model (DP-SGD-NOS and DP-SGD-S for short), for $(\alpha, L)$-Hölder smooth losses; both add calibrated Laplace noise, under the non-shuffled and shuffled schemes respectively, in the $\ell_p^d$ setting for $p \in [1, 2]$. We establish privacy guarantees using advanced composition and privacy amplification techniques. We also analyze the convergence of DP-SGD-NOS and DP-SGD-S and obtain, up to logarithmic factors, the optimal excess population risks $O\!\left(\sqrt{1/n} + \sqrt{d \log(1/\delta)}/(n\epsilon)\right)$ and $O\!\left(\sqrt{1/n} + \sqrt{d \log(1/\delta)\log(n/\delta)}\big/\big(n^{(4+\alpha)/(2(1+\alpha))}\epsilon\big)\right)$, with gradient complexity $O\!\left(n^{(2-\alpha)/(1+\alpha)} + n\right)$. It turns out that the optimal utility bound under the shuffle model is superior to the bound without the shuffle model, which is consistent with previous work. In addition, DP-SGD-S achieves the optimal utility bound with a linear $O(n)$ number of gradient computations when $\alpha \ge 1/2$. This reveals a significant tradeoff between the smoothness level of $(\alpha, L)$-Hölder smooth losses and the gradient complexity of differentially private SGD, both without and with the shuffle model.
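To make the mechanism concrete, the following is a minimal, illustrative sketch of a single pass of SGD with Laplace-perturbed gradients. All names (dp_sgd_laplace, grad_fn, clip, noise_scale) and the fixed noise scale are assumptions for illustration only; they are not the paper's DP-SGD-NOS or DP-SGD-S algorithms, which calibrate the Laplace noise to the $(\alpha, L)$-Hölder smoothness, the $\ell_p^d$ geometry, and the privacy budget via advanced composition (and, for DP-SGD-S, shuffle-based privacy amplification).

```python
import numpy as np

def dp_sgd_laplace(grad_fn, data, dim, lr=0.1, clip=1.0, noise_scale=1.0, seed=0):
    """Illustrative single-pass SGD with Laplace-perturbed gradients.

    This is NOT the paper's exact DP-SGD-NOS/DP-SGD-S procedure: the noise
    scale here is a placeholder that would be calibrated to (epsilon, delta)
    through advanced composition in the paper's analysis.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    # Randomize the sample order. Note: the paper's shuffle model refers to a
    # shuffler that permutes locally randomized reports for privacy
    # amplification, not merely to reshuffling the training data.
    order = rng.permutation(len(data))
    for i in order:
        x, y = data[i]
        g = grad_fn(w, x, y)
        # Clip to bound the per-example l1-sensitivity of the update.
        g = g / max(1.0, np.linalg.norm(g, ord=1) / clip)
        # Add calibrated Laplace noise to the clipped gradient.
        g_noisy = g + rng.laplace(loc=0.0, scale=noise_scale, size=dim)
        w = w - lr * g_noisy
    return w

# Example usage with a (subgradient of the) hinge loss, one of the non-smooth
# losses covered by the Hölder-smoothness relaxation discussed in the abstract:
def hinge_grad(w, x, y):
    return -y * x if y * np.dot(w, x) < 1.0 else np.zeros_like(w)
```

Laplace noise is a natural choice here because the Laplace mechanism is calibrated to $\ell_1$ sensitivity, which fits the $\ell_p^d$ setting for $p \in [1, 2]$ considered in the paper.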
