Abstract

The main purpose of this chapter is to motivate and analyze the Riemannian trust-region method (RTR). This optimization algorithm shines brightest when it uses both the Riemannian gradient and the Riemannian Hessian. It applies to optimization on manifolds in general, hence to embedded submanifolds of linear spaces in particular. For that setting, the previous chapters introduce the necessary geometric tools. Toward RTR, the chapter first introduces a Riemannian version of Newton's method, motivated by first developing second-order optimality conditions. Each iteration of Newton's method requires solving a linear system of equations in a tangent space. To this end, the classical conjugate gradients method (CG) is reviewed. Then, RTR is presented with a worst-case convergence analysis guaranteeing that, under some assumptions, it finds points that approximately satisfy first- and second-order necessary optimality conditions. Subproblems can be solved with a variant of CG called truncated-CG (tCG). The chapter closes with three optional sections: one about local convergence, one providing simpler conditions to ensure convergence, and one about checking Hessians numerically.
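The linear-system step mentioned above can be illustrated with classical CG. The sketch below is not the chapter's tCG variant: it is a minimal CG solver for a symmetric positive definite system, with the Riemannian Hessian acting on a tangent space modeled here, for illustration only, as a plain matrix acting on a vector space.

```python
import numpy as np

def conjugate_gradients(A, b, tol=1e-10, max_iter=None):
    """Classical CG for A x = b, assuming A is symmetric positive definite.

    Illustrative sketch: in a Riemannian Newton step, A would be the
    Hessian operator on the tangent space and b the negative gradient.
    """
    n = b.shape[0]
    if max_iter is None:
        max_iter = n  # in exact arithmetic, CG converges in at most n steps
    x = np.zeros(n)
    r = b - A @ x          # residual of the current iterate
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:  # stop once the residual is small
            break
        p = r + (rs_new / rs) * p  # new direction, A-conjugate to the old ones
        rs = rs_new
    return x
```

The tCG variant used inside RTR additionally truncates this iteration when it leaves the trust region or detects a direction of nonpositive curvature.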
