Abstract
This paper considers the problem of loss-function minimization when only (possibly noisy) measurements of the loss function are available; in particular, no measurements of the gradient of the loss function are assumed available. The simultaneous perturbation stochastic approximation (SPSA) algorithm displays the classic behavior of first-order search algorithms: a steep initial decline in the loss function followed by a slow decline toward the optimum. This paper presents a second-order SPSA algorithm based on estimating both the loss-function gradient and the inverse Hessian matrix at each iteration. The aim of this approach is to emulate the acceleration properties associated with deterministic algorithms of Newton-Raphson form, particularly in the terminal phase, where the first-order SPSA algorithm slows in its convergence. This second-order SPSA algorithm requires only three loss-function measurements at each iteration, independent of the problem dimension.
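The iteration described in the abstract can be sketched as follows. This is a hypothetical illustration written from the abstract's description only, not the paper's exact estimator: each iteration draws two independent random perturbations, forms a two-sided simultaneous-perturbation gradient estimate from two loss measurements, and uses one additional measurement to update a running Hessian estimate (three measurements total, regardless of dimension). The function name, gain-sequence constants, and the eigenvalue-clipping step used to keep the Hessian estimate positive definite are all assumptions of this sketch.

```python
import numpy as np

def second_order_spsa(loss, theta0, n_iter=500, a=1.0, A=20.0, c=0.1, seed=0):
    """Illustrative second-order SPSA-style minimizer (sketch, not the
    paper's exact algorithm). Uses three loss measurements per iteration."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    p = theta.size
    h_bar = np.eye(p)  # running (averaged) Hessian estimate
    for k in range(n_iter):
        delta = rng.choice([-1.0, 1.0], size=p)    # Bernoulli +-1 perturbation
        delta2 = rng.choice([-1.0, 1.0], size=p)   # second, independent perturbation
        y_plus = loss(theta + c * delta)               # measurement 1
        y_minus = loss(theta - c * delta)              # measurement 2
        y_pp = loss(theta + c * delta + c * delta2)    # measurement 3
        # Two-sided simultaneous-perturbation gradient estimate at theta
        # (note 1/delta_i == delta_i for +-1 components)
        g_hat = (y_plus - y_minus) / (2.0 * c) * delta
        # One-sided gradient estimate at the perturbed point theta + c*delta
        g_pert = (y_pp - y_plus) / c * delta2
        # (g_pert - g_hat) is roughly H @ (c*delta); symmetrize the outer product
        h_k = np.outer((g_pert - g_hat) / c, delta)
        h_k = 0.5 * (h_k + h_k.T)
        # Running average of per-iteration Hessian estimates, anchored at identity
        h_bar = ((k + 1) * h_bar + h_k) / (k + 2)
        # Clip eigenvalues to force positive definiteness, then take a damped
        # Newton-like step with a decaying stochastic-approximation gain
        w, v = np.linalg.eigh(h_bar)
        w = np.clip(w, 0.1, None)
        step = v @ ((v.T @ g_hat) / w)
        theta = theta - (a / (k + 1 + A)) * step
    return theta
```

For example, on a noiseless quadratic loss such as `lambda th: 0.5 * (th[0]**2 + 2.0 * th[1]**2)`, repeated calls from a starting point like `[3.0, 3.0]` drive the loss toward its minimum at the origin, with the averaged Hessian estimate gradually improving the step direction.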