Abstract

In stochastic optimization problems where only noisy zeroth-order (ZO) oracles are available, the Kiefer-Wolfowitz algorithm and its randomized counterparts are widely used as gradient estimators. Existing algorithms generate the random perturbations from distributions with zero mean and an isotropic covariance matrix (the identity or a scalar multiple of it). In contrast, this work considers the generalization in which the perturbations may have an anisotropic covariance built from the ZO oracle history. We propose to use a second-order approximation as the covariance matrix of the random perturbation, hence the name Hessian-aided random perturbation (HARP). HARP makes two or more (depending on the specific estimator form) ZO oracle calls per iteration to construct the gradient and Hessian estimators. We prove HARP's almost-sure convergence and derive its convergence rate under standard assumptions. We demonstrate, with theoretical guarantees and numerical experiments, that HARP is less sensitive to ill-conditioning and more query-efficient than gradient approximation schemes whose random perturbations have an isotropic covariance.
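The abstract does not spell out the estimator formulas, so the following is a minimal, illustrative Python sketch of the HARP idea under common ZO conventions: the perturbation is drawn with covariance shaped by a running Hessian estimate, and a symmetric two-point difference (plus the center point) supplies both a gradient estimate and a rank-one curvature update. Every name and constant here (`harp_sketch`, the covariance choice C = H^{-1}, the damped averaging weight, the step sizes) is an assumption for illustration, not the paper's exact algorithm.

```python
import numpy as np

def harp_sketch(f, x0, iters=200, delta=1e-2, lr=0.1, reg=1e-3, seed=0):
    """Illustrative HARP-style zeroth-order loop (assumed form, not the
    paper's exact estimator): perturbations are sampled with a covariance
    shaped by a running Hessian estimate built from past ZO oracle calls."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    H = np.eye(d)  # running Hessian estimate
    for _ in range(iters):
        # Project the estimate to symmetric positive definite before inverting.
        w, V = np.linalg.eigh((H + H.T) / 2.0)
        Hpd = (V * np.clip(w, reg, None)) @ V.T
        C = np.linalg.inv(Hpd)              # anisotropic perturbation covariance
        u = np.linalg.cholesky(C) @ rng.standard_normal(d)
        f0, fp, fm = f(x), f(x + delta * u), f(x - delta * u)  # 3 ZO calls
        # Two-point gradient estimate; multiplying by C^{-1} (= Hpd) corrects
        # for the anisotropic sampling so the estimate is unbiased for the
        # smoothed gradient.
        g = (fp - fm) / (2.0 * delta) * (Hpd @ u)
        # Second difference approximates u^T Hess u; rank-one heuristic update.
        h = (fp - 2.0 * f0 + fm) / delta**2
        H = 0.9 * H + 0.1 * h * np.outer(u, u) / max(u @ u, 1e-12)
        x = x - lr * g
    return x

# Toy usage on an ill-conditioned quadratic (hypothetical test problem).
A = np.diag([1.0, 100.0])
x_min = harp_sketch(lambda z: 0.5 * z @ A @ z, x0=np.ones(2))
```

Shaping the covariance by the inverse of the curvature estimate is what lets the sampler probe poorly scaled directions more effectively than an isotropic one, which is consistent with the abstract's claim about ill-conditioning; the specific choice C = H^{-1} is one natural reading, not a detail the abstract states.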
