Abstract

Convexity is one of the most fruitful concepts in classical optimization. Geodesic convexity generalizes that concept to optimization on Riemannian manifolds. There are several ways to carry out such a generalization: this chapter favors permissive definitions that are sufficient to retain the most important properties for optimization purposes (e.g., local optima are global optima). Alternative definitions are discussed, highlighting the fact that they all coincide in the special case of Hadamard manifolds (essentially, complete, simply connected Riemannian manifolds of nonpositive curvature). The chapter continues with a discussion of the special properties of differentiable geodesically (strictly, strongly) convex functions, and builds on them to show global linear convergence of Riemannian gradient descent, assuming strong geodesic convexity and Lipschitz continuous gradients (via the Polyak–Łojasiewicz inequality). The chapter closes with two examples of manifolds where geodesic convexity has proved useful, namely, the positive orthant with a log-barrier metric (recovering geometric programming), and the cone of positive definite matrices with the log-Euclidean and the affine-invariant Riemannian metrics.
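To make the closing example concrete, the following minimal sketch (not taken from the chapter) runs Riemannian gradient descent on the cone of positive definite matrices equipped with the affine-invariant metric. The objective f(X) = dist(X, A)^2, the squared geodesic distance to a fixed positive definite matrix A, is strongly geodesically convex in this metric, and its Riemannian gradient is -2 Log_X(A); each step X <- Exp_X(-alpha * grad f(X)) therefore contracts the distance to the minimizer by a constant factor, illustrating the linear convergence discussed above. The helper names (sym_fun, exp_map, log_map, dist), the choice of objective, and the step size are illustrative assumptions, not the chapter's notation.

    import numpy as np

    def sym_fun(S, f):
        # Apply a scalar function to a symmetric matrix via its eigenvalues.
        w, Q = np.linalg.eigh(S)
        return (Q * f(w)) @ Q.T

    def exp_map(X, V):
        # Exponential map of the affine-invariant metric on the SPD cone:
        # Exp_X(V) = X^{1/2} expm(X^{-1/2} V X^{-1/2}) X^{1/2}.
        Xh = sym_fun(X, np.sqrt)
        Xih = sym_fun(X, lambda w: 1.0 / np.sqrt(w))
        return Xh @ sym_fun(Xih @ V @ Xih, np.exp) @ Xh

    def log_map(X, A):
        # Logarithm map: tangent vector at X along the geodesic toward A.
        Xh = sym_fun(X, np.sqrt)
        Xih = sym_fun(X, lambda w: 1.0 / np.sqrt(w))
        return Xh @ sym_fun(Xih @ A @ Xih, np.log) @ Xh

    def dist(X, A):
        # Geodesic distance induced by the affine-invariant metric.
        Xih = sym_fun(X, lambda w: 1.0 / np.sqrt(w))
        return np.linalg.norm(sym_fun(Xih @ A @ Xih, np.log), 'fro')

    rng = np.random.default_rng(0)
    n = 4
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)   # target: a well-conditioned SPD matrix
    X = np.eye(n)                 # initial iterate

    # f(X) = dist(X, A)^2 has Riemannian gradient -2 Log_X(A), so the step
    # X <- Exp_X(2 * alpha * Log_X(A)) moves along the geodesic toward A and
    # contracts dist(X, A) by the factor |1 - 2*alpha| at every iteration.
    alpha = 0.25
    for k in range(15):
        X = exp_map(X, 2.0 * alpha * log_map(X, A))
        print(f"iter {k:2d}: dist(X, A) = {dist(X, A):.3e}")

With alpha = 0.25 the printed distances halve at each iteration, the hallmark of the linear (geometric) convergence rate that the chapter establishes in general via the Polyak–Łojasiewicz inequality.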
