Black-Box Quantile Optimization via Finite-Difference-Based Gradient Approximation

Abstract

Risk management necessitates consideration of metrics such as quantiles to supplement conventional mean performance measures. In "Quantile Optimization via Multiple-Timescale Local Search for Black-Box Functions," J. Hu, M. Song, and M. C. Fu consider the problem of optimizing a quantile of a black-box output. They introduce two new iterative multiple-timescale stochastic approximation algorithms that use finite-difference-based gradient estimators. The first algorithm requires 2d + 1 samples of the black-box function per iteration, where d is the number of decision variables. The second employs a simultaneous-perturbation-based gradient estimator that uses only three samples per iteration, irrespective of the number of decision variables. The authors prove strong local convergence for both algorithms and analyze their finite-time convergence rates through a novel fixed-point argument. These algorithms perform well across a varied set of benchmark problems.
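The abstract does not give implementation details, but the following minimal Python sketch conveys the flavor of the second, simultaneous-perturbation-based algorithm: a fast timescale tracks quantile estimates via stochastic approximation, while a slow timescale takes gradient steps built from those estimates, using three function samples per iteration regardless of dimension. The function name, step-size schedules, and specific update rules are illustrative assumptions, not the authors' published method.

import numpy as np

def spsa_quantile_opt(f, x0, alpha=0.9, iters=5000, seed=0):
    """Minimize the alpha-quantile of a noisy black-box f using three
    f-samples per iteration. Illustrative sketch only: the step-size
    schedules and update rules below are assumptions, not the exact
    algorithm of Hu, Song, and Fu."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    q0 = q_plus = q_minus = 0.0  # running quantile trackers (fast timescale)
    for k in range(1, iters + 1):
        a_k = 0.5 / k          # slow timescale: decision-variable step size
        b_k = 0.5 / k ** 0.6   # fast timescale: quantile-tracking step size
        c_k = 0.2 / k ** 0.25  # finite-difference perturbation size
        delta = rng.choice((-1.0, 1.0), size=d)  # random Rademacher directions
        y0, y_p, y_m = f(x), f(x + c_k * delta), f(x - c_k * delta)  # 3 samples
        # Stochastic-approximation quantile recursions: each tracker drifts
        # toward the alpha-quantile of f at its evaluation point.
        q0 += b_k * (alpha - (y0 <= q0))
        q_plus += b_k * (alpha - (y_p <= q_plus))
        q_minus += b_k * (alpha - (y_m <= q_minus))
        # Simultaneous-perturbation estimate of the quantile gradient:
        # one random direction perturbs all d coordinates at once.
        g_hat = (q_plus - q_minus) / (2.0 * c_k) / delta
        x -= a_k * g_hat  # gradient step on the slow timescale
    return x, q0

# Example: minimize the 0.9-quantile of a noisy quadratic in d = 10 dimensions.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    f = lambda x: float(np.sum(x ** 2) + rng.normal(scale=0.5))
    x_star, q_star = spsa_quantile_opt(f, x0=np.ones(10))
    print("x*:", np.round(x_star, 3), " estimated 0.9-quantile:", round(q_star, 3))

Note the contrast with the paper's first algorithm, which perturbs each of the d coordinates separately and hence needs 2d + 1 samples per iteration; the simultaneous-perturbation version above trades that cost for a noisier but dimension-free gradient estimate.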
