Abstract

We consider the structured stochastic convex program that minimizes $\mathbb{E}[\tilde f(x,\xi)]+\mathbb{E}[\tilde g(y,\xi)]$ subject to the linear constraint $Ax + By = b$. Motivated by the need for decentralized schemes that exploit this structure, we propose a stochastic inexact ADMM (SI-ADMM) framework in which the subproblems are solved inexactly via stochastic approximation. Within this framework, we prove the following: (i) under suitable assumptions on the batch size of samples used at each iteration, the SI-ADMM scheme produces a sequence that converges almost surely to the unique solution; (ii) if the number of gradient steps (equivalently, the number of sampled gradients) used to solve the subproblems increases at a geometric rate across iterations, the mean-squared error diminishes to zero at a prescribed geometric rate; (iii) the overall iteration complexity in terms of gradient steps (equivalently, samples) is consistent with the canonical $\mathcal{O}(1/\epsilon)$ level. Preliminary numerical experiments on LASSO and distributed regression suggest that the scheme compares favorably with its competitors.
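To make the framework concrete, the following is a minimal sketch of the SI-ADMM idea on a hypothetical toy instance (not taken from the paper): a scalar consensus problem $\min \mathbb{E}[(x-\xi_1)^2]/2 + \mathbb{E}[(y-\xi_2)^2]/2$ subject to $x - y = 0$, whose solution is $x^* = y^* = (\mu_1+\mu_2)/2$. Each ADMM subproblem is solved inexactly by a stochastic-approximation loop whose budget of sampled gradients grows geometrically across outer iterations, mirroring point (ii) of the abstract. All names, step sizes, and parameter choices below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy instance (an assumption, not from the paper):
#   min E[(x - xi1)^2]/2 + E[(y - xi2)^2]/2   s.t.  x - y = 0,
# with xi1 ~ N(mu1, sigma^2), xi2 ~ N(mu2, sigma^2).
# The unique solution is x* = y* = (mu1 + mu2)/2 = 2.0.
mu1, mu2, sigma = 1.0, 3.0, 0.1
rho = 1.0                      # augmented-Lagrangian penalty parameter
x, y, lam = 0.0, 0.0, 0.0      # primal iterates and multiplier

def sa_solve(stoch_grad, z0, n_steps):
    """Inexactly minimize a (1+rho)-strongly-convex subproblem by
    stochastic approximation, using the classic 1/(mu*j) step size."""
    z = z0
    for j in range(1, n_steps + 1):
        z -= stoch_grad(z) / ((1.0 + rho) * j)
    return z

n_inner, growth = 5, 1.5       # geometrically increasing inner SA budget
for k in range(20):
    steps = int(np.ceil(n_inner * growth**k))
    # x-subproblem: min_x f(x) + lam*x + rho/2*(x - y)^2, sampled gradient
    x = sa_solve(lambda z: (z - (mu1 + sigma * rng.standard_normal()))
                           + lam + rho * (z - y), x, steps)
    # y-subproblem: min_y g(y) - lam*y + rho/2*(x - y)^2, sampled gradient
    y = sa_solve(lambda z: (z - (mu2 + sigma * rng.standard_normal()))
                           - lam - rho * (x - z), y, steps)
    lam += rho * (x - y)       # dual ascent on the coupling constraint

print(x, y)                    # both iterates settle near (mu1 + mu2)/2
```

Because the inner SA budget grows geometrically, the subproblem error at outer iteration $k$ decays like $\mathcal{O}(1/N_k)$, which is what allows the outer ADMM sequence to retain a geometric rate in mean-squared error; with a fixed inner budget, the accumulated inexactness would instead dominate.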
