Abstract
Gradient descent, or negative gradient flow, is a standard technique in optimization for finding minima of functions. Many implementations of gradient descent rely on a discretized version: move in the direction of the negative gradient for a fixed step size, recompute the gradient, and repeat. In this paper, we present an approach to manifold learning in which gradient descent takes place in the infinite-dimensional space $\operatorname{Emb}(M, \mathbb{R}^N)$ of smooth embeddings $f$ of a manifold $M$ into $\mathbb{R}^N$. Implementing a discretized version of gradient descent for a penalty function $P$ that scores an embedding $f \in \operatorname{Emb}(M, \mathbb{R}^N)$ requires estimating how far we can move in a fixed direction, namely the direction of one gradient step, before leaving the space of smooth embeddings. Our main result is an explicit lower bound for this step length in terms of the Riemannian geometry of $f(M)$. In particular, we consider the case in which the gradient of $P$ is pointwise normal to the embedded manifold $f(M)$. We prove that this case arises when $P$ is invariant under diffeomorphisms of $M$, a natural condition in manifold learning.
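As a rough illustration of the discretized gradient descent referred to above, the sketch below runs fixed-step descent on a toy penalty defined on a finite point cloud, with a cap on how far any single update may move a point. The penalty, the step-size cap `max_step`, and the tolerance are hypothetical placeholders for illustration only; they are not the paper's penalty function or its geometric step-length bound.

```python
# Minimal sketch: fixed-step gradient descent with a capped step length.
# The penalty, the cap, and the tolerance are illustrative placeholders,
# not the construction from the paper.
import numpy as np

def penalty(X):
    # Hypothetical penalty: sum of squared distances of each point from the
    # unit sphere, scoring a point cloud X (n x N) as a stand-in for an embedding.
    norms = np.linalg.norm(X, axis=1)
    return np.sum((norms - 1.0) ** 2)

def penalty_grad(X):
    # Analytic gradient of the hypothetical penalty above.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return 2.0 * (norms - 1.0) * X / np.maximum(norms, 1e-12)

def gradient_descent(X, step=0.1, max_step=0.5, tol=1e-8, iters=1000):
    # Move against the gradient for a set step, recompute, and repeat.
    # `max_step` caps the displacement of any single point per update,
    # mimicking the role of a lower bound on the admissible step length.
    for _ in range(iters):
        g = penalty_grad(X)
        gnorm = np.max(np.linalg.norm(g, axis=1))
        if gnorm < tol:
            break
        scale = min(step, max_step / gnorm)  # keep each update within the cap
        X = X - scale * g
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X0 = rng.normal(size=(100, 3))
    X1 = gradient_descent(X0)
    print("initial penalty:", penalty(X0), "final penalty:", penalty(X1))
```

In the paper's setting the object being updated is an embedding rather than a point cloud, and the admissible step length comes from the Riemannian geometry of the image; the cap here only mimics that role numerically.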