Abstract

Motivated by big data applications, we consider unconstrained stochastic optimization problems. Stochastic quasi-Newton methods have proved successful in addressing such problems. However, in both convex and non-convex regimes, most existing convergence theory requires the gradient mapping of the objective function to be Lipschitz continuous, a requirement that might not hold. To address this gap, we consider problems with not necessarily Lipschitzian gradients. Employing a local smoothing technique, we develop a smoothing stochastic quasi-Newton (S-SQN) method. Our main contributions are three-fold: (i) under suitable assumptions, we show that the sequence generated by the S-SQN scheme converges to the unique optimal solution of the smoothed problem almost surely; (ii) we derive an error bound in terms of the smoothed objective function values; and (iii) to quantify the solution quality, we derive a bound that relates the iterate generated by the S-SQN method to the optimal solution of the original problem.
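To make the general idea concrete, the following is a minimal, hypothetical sketch of a smoothing stochastic quasi-Newton loop. It is not the authors' S-SQN scheme: the smoothing shown here is standard Gaussian (randomized) smoothing, the curvature update is a plain BFGS-style inverse-Hessian update on the smoothed stochastic gradients, and the test problem, batch size, step size, and smoothing parameter mu are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, xi):
    # Illustrative stochastic objective whose gradient is not Lipschitz
    # continuous (the |x|^{3/2} term); xi is a zero-mean noise sample.
    return np.sum(np.abs(x) ** 1.5) + xi @ x

def smoothed_grad(x, mu, batch=8):
    # Monte Carlo estimate of the gradient of the Gaussian-smoothed objective
    # f_mu(x) = E_u[ f(x + mu * u) ], with u ~ N(0, I).
    d = x.size
    g = np.zeros(d)
    for _ in range(batch):
        u = rng.standard_normal(d)
        xi = 0.1 * rng.standard_normal(d)
        g += (f(x + mu * u, xi) - f(x, xi)) / mu * u
    return g / batch

def s_sqn_sketch(x0, mu=0.1, iters=200, step=0.05):
    # Smoothing + stochastic quasi-Newton iteration (illustrative only).
    x, d = x0.copy(), x0.size
    H = np.eye(d)                      # inverse-Hessian approximation
    g = smoothed_grad(x, mu)
    for _ in range(iters):
        x_new = x - step * H @ g       # quasi-Newton step on the smoothed problem
        g_new = smoothed_grad(x_new, mu)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-10:                 # skip update unless curvature is positive
            rho = 1.0 / sy
            V = np.eye(d) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS inverse update
        x, g = x_new, g_new
    return x

print(s_sqn_sketch(np.full(5, 2.0)))
```

Under these assumptions, the iteration targets the smoothed problem min_x f_mu(x), which mirrors contribution (iii): any guarantee stated for the smoothed problem must then be related back to the optimal solution of the original, unsmoothed problem.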

