Abstract

In this paper, we consider a strongly convex stochastic optimization problem and propose three classes of variable sample-size stochastic first-order methods: (i) the standard stochastic gradient descent method, (ii) its accelerated variant, and (iii) the stochastic heavy-ball method. In each scheme, the exact gradients are approximated by averaging across an increasing batch size of sampled gradients. We prove that when the sample size increases at a geometric rate, the generated estimates converge in mean to the optimal solution at an analogous geometric rate for schemes (i)–(iii). Based on this result, we provide central limit statements, whereby it is shown that the rescaled estimation errors converge in distribution to a normal distribution with the associated covariance matrix dependent on the Hessian matrix, the covariance of the gradient noise, and the step length. If the sample size increases at a polynomial rate, we show that the estimation errors decay at a corresponding polynomial rate and establish the associated central limit theorems (CLTs). Under certain conditions, we discuss how both the algorithms and the associated limit theorems may be extended to constrained and nonsmooth regimes. Finally, we provide an avenue to construct confidence regions for the optimal solution based on the established CLTs and test the theoretical findings on a stochastic parameter estimation problem.

Funding: This work was supported by the Office of Naval Research [Grant N00014-22-1-2589] and Basic Energy Sciences [Grant DE-SC0023303]. The work of J. Lei was partially supported by the National Natural Science Foundation of China [Grants 72271187 and 62088101].

Supplemental Material: The online appendix is available at https://doi.org/10.1287/moor.2021.0068.
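To illustrate the core idea of scheme (i), the following is a minimal sketch of variable sample-size SGD on a synthetic strongly convex quadratic problem: at iteration k the gradient is averaged over a batch whose size grows geometrically, so the gradient-noise variance shrinks at a geometric rate. All names, the test problem, and the parameter values (`gamma`, `rho`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative strongly convex problem: minimize f(x) = 0.5 (x - x*)^T H (x - x*),
# where only noisy gradient samples H(x - x*) + noise are observable.
# The problem instance and all parameters below are hypothetical.
d = 5
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))
H = A.T @ A                       # Hessian; positive definite, so f is strongly convex
x_star = rng.standard_normal(d)   # optimal solution

def sampled_gradient(x, batch):
    """Average `batch` noisy gradient samples; variance decays like 1/batch."""
    noise = rng.standard_normal((batch, d)).mean(axis=0)
    return H @ (x - x_star) + noise

def vss_sgd(x0, steps=40, gamma=0.2, rho=1.3):
    """Variable sample-size SGD with geometric batch-size schedule N_k = ceil(rho**k)."""
    x = x0.copy()
    for k in range(steps):
        N_k = int(np.ceil(rho ** k))            # geometrically increasing batch size
        x = x - gamma * sampled_gradient(x, N_k)
    return x

x_final = vss_sgd(np.zeros(d))
error = np.linalg.norm(x_final - x_star)
```

Because both the deterministic contraction and the averaged-noise variance decay geometrically, the mean error of the iterates decays at a geometric rate as well, mirroring the result stated for schemes (i)–(iii).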
