Abstract

Many machine learning tasks, such as structured sparse coding and multi-task learning, can be cast as equality constrained optimization problems. The stochastic alternating direction method of multipliers (SADMM) is a popular algorithm for solving such large-scale problems and has been used successfully in many real-world applications. However, existing SADMMs overlook an important design issue: protecting sensitive information. To address this challenge, this paper proposes a novel differentially private stochastic ADMM framework for solving equality constrained machine learning problems. In particular, to further improve utility in privacy-preserving equality constrained optimization, a Laplacian smoothing operation is introduced into our differentially private ADMM framework, which smooths out the Gaussian noise injected by the Gaussian mechanism. We then propose an efficient differentially private variance-reduced stochastic ADMM (DP-VRADMM) algorithm with Laplacian smoothing for both strongly convex and general convex objectives. As a by-product, we also present a new differentially private stochastic ADMM algorithm. In theory, we provide both privacy guarantees and utility guarantees for the proposed algorithms, showing that Laplacian smoothing improves the utility bounds of our algorithms. Experimental results on real-world datasets verify our theoretical results and the effectiveness of our algorithms.
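To make the combination of the Gaussian mechanism and Laplacian smoothing concrete, the sketch below shows one common construction, not the exact procedure from the paper: a stochastic gradient is privatized by adding Gaussian noise, and the noisy vector is then multiplied by the smoothing operator (I - sigma*Delta)^{-1}, where Delta is the 1D discrete Laplacian with periodic boundary conditions. Because this matrix is circulant, the solve can be done with an FFT. The function names, the choice of sigma, and where the smoothed gradient would enter an ADMM primal update are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def laplacian_smooth(g, sigma=1.0):
    """Apply (I - sigma*Delta)^{-1} to a vector g via FFT, where Delta is the
    1D discrete Laplacian with periodic boundary (second differences).
    The matrix is circulant with diagonal 1 + 2*sigma and off-diagonals -sigma,
    so the linear solve costs O(d log d)."""
    d = g.shape[0]
    c = np.zeros(d)              # first column of the circulant matrix
    c[0] = 1.0 + 2.0 * sigma
    c[1] = -sigma
    c[-1] = -sigma
    # Circulant solve: eigenvalues are fft(c), all >= 1, so division is safe.
    return np.real(np.fft.ifft(np.fft.fft(g) / np.fft.fft(c)))

def gaussian_mechanism(grad, l2_sensitivity, noise_multiplier, rng):
    """Standard Gaussian mechanism: add N(0, (noise_multiplier * sensitivity)^2 I)."""
    noise = rng.normal(0.0, noise_multiplier * l2_sensitivity, size=grad.shape)
    return grad + noise

# Illustrative use inside one (hypothetical) privatized gradient step.
rng = np.random.default_rng(0)
grad = rng.normal(size=1000)                      # stand-in for a stochastic gradient
noisy = gaussian_mechanism(grad, l2_sensitivity=1.0, noise_multiplier=1.0, rng=rng)
smoothed = laplacian_smooth(noisy, sigma=1.0)
# The injected noise component has lower variance after smoothing.
print(np.var(noisy - grad), np.var(smoothed - laplacian_smooth(grad)))
```

Under these assumptions, the privacy analysis is unchanged because smoothing is a data-independent post-processing step applied after the Gaussian mechanism, while the effective noise perturbing the update is damped, which is the intuition behind the improved utility bounds claimed in the abstract.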
