This paper presents general theoretical studies on the asymptotic convergence rate (ACR) for finite-dimensional optimization. Given a continuous problem function and a discrete-time stochastic optimization process, the ACR is the optimal constant controlling the asymptotic behaviour of the expected approximation errors. Under general assumptions, the condition ACR < 1 implies linear behaviour of the expected time of hitting the ε-optimal sublevel set as ε → 0+ and determines an upper bound for the convergence rate of the trajectories of the process. The paper provides a general characterization of the ACR and, in particular, shows that some algorithms cannot converge linearly fast for any nontrivial continuous optimization problem. The relation between the asymptotic convergence rate in the objective space and the asymptotic convergence rate in the search space is also established. Examples and numerical simulations using a (1+1) self-adaptive evolution strategy and other algorithms are presented.
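As an informal illustration only (not taken from the paper): a minimal Python sketch of a (1+1) self-adaptive evolution strategy run on the sphere function, followed by a crude empirical estimate of the per-iteration reduction factor of the approximation error. The function names, parameter values, and the estimation rule are illustrative assumptions, not the paper's definitions or experimental setup.

```python
import numpy as np

def one_plus_one_sa_es(f, x0, sigma0=1.0, iters=5000, tau=None, rng=None):
    """(1+1) self-adaptive ES sketch: the step size is mutated log-normally
    together with the candidate point; both are kept only on improvement."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x0)
    tau = 1.0 / np.sqrt(n) if tau is None else tau  # assumed learning rate
    x, sigma = np.array(x0, dtype=float), sigma0
    errors = [f(x)]
    for _ in range(iters):
        sigma_new = sigma * np.exp(tau * rng.standard_normal())
        y = x + sigma_new * rng.standard_normal(n)
        if f(y) <= f(x):                  # elitist acceptance on improvement
            x, sigma = y, sigma_new
        errors.append(f(x))
    return np.array(errors)

# Sphere function: minimum value 0, so f(x_t) is the approximation error.
sphere = lambda x: float(np.dot(x, x))
errs = one_plus_one_sa_es(sphere, x0=np.full(10, 5.0),
                          rng=np.random.default_rng(0))

# Crude geometric estimate of the per-iteration error reduction factor over
# the second half of the run; a value below 1 suggests linear (geometric)
# convergence of the error in this single trajectory.
k = len(errs) // 2
rate = (errs[-1] / errs[k]) ** (1.0 / (len(errs) - 1 - k))
print("empirical per-iteration error reduction factor:", rate)
```

This single-run estimate is only a heuristic proxy; the ACR studied in the paper concerns the asymptotic behaviour of expected errors, not one sample trajectory.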