Abstract

For smooth convex optimization problems, the theoretically optimal convergence rate of first-order algorithms is $\mathcal{O}(1/k^2)$. This paper proposes three improved accelerated gradient algorithms that use the gradient information at the latest point. For the step size, new adaptive line search strategies are adopted to avoid using the global Lipschitz constant and to make the algorithms converge faster. By constructing a descent Lyapunov function, we prove that the proposed algorithms preserve the $\mathcal{O}(1/k^2)$ convergence rate. Numerical experiments demonstrate that our algorithms outperform several existing algorithms that attain the optimal convergence rate.
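To make the described algorithm family concrete, the following is a minimal sketch of a standard accelerated gradient method with backtracking line search, which estimates the smoothness constant locally instead of using a global Lipschitz constant. It is not the paper's three algorithms or its specific line search; the function name `accelerated_gradient`, the parameters `L0` and `eta`, and the quadratic test problem are illustrative assumptions.

```python
import numpy as np

def accelerated_gradient(f, grad, x0, L0=1.0, eta=2.0, max_iter=500, tol=1e-8):
    """Nesterov/FISTA-style accelerated gradient with backtracking line search.

    A generic sketch, not the paper's method: the local smoothness estimate L
    is increased until the standard quadratic upper bound holds, so no global
    Lipschitz constant is required.
    """
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    L = L0
    for _ in range(max_iter):
        g = grad(y)
        # Backtracking: grow L until the quadratic upper bound holds at the trial point.
        while True:
            x_new = y - g / L
            d = x_new - y
            if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d):
                break
            L *= eta
        # Standard momentum (extrapolation) update.
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x, t = x_new, t_new
    return x

# Example: minimize the smooth convex quadratic f(x) = 0.5 * x^T A x - b^T x.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    A = A.T @ A / 50                      # positive semidefinite Hessian
    b = rng.standard_normal(20)
    f = lambda x: 0.5 * x @ A @ x - b @ x
    grad = lambda x: A @ x - b
    x_star = accelerated_gradient(f, grad, np.zeros(20))
    print("gradient norm at solution:", np.linalg.norm(grad(x_star)))
```

With a backtracking condition of this form, each step satisfies the same descent inequality used in the classical $\mathcal{O}(1/k^2)$ analysis, which is why line-search variants can retain the optimal rate without knowing the Lipschitz constant in advance.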
