Abstract

The problem of extending a function $f$, defined on a training data set $\mathcal{C}$ on an unknown manifold $\mathbb{X}$, to the entire manifold and a tubular neighborhood of this manifold is considered in this paper. For $\mathbb{X}$ embedded in a high dimensional ambient Euclidean space $\mathbb{R}^D$, a deep learning algorithm is developed for finding a local coordinate system for the manifold {\bf without eigen--decomposition}, which reduces the problem to the classical problem of function approximation on a low dimensional cube. Deep nets (or multilayered neural networks) are proposed to accomplish this approximation scheme using the training data. Our methods do not involve optimization techniques such as back--propagation, yet assure optimal (a priori) error bounds on the output in terms of the number of derivatives of the target function. In addition, these methods are universal, in that they do not require prior knowledge of the smoothness of the target function, but adjust the accuracy of approximation locally and automatically, depending only on the local smoothness of the target function. Our ideas extend easily to solve both the pre--image problem and the out--of--sample extension problem, with a priori bounds on the growth of the function thus extended.
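Schematically, the reduction described in the abstract can be written as follows; the chart $\Phi$ and the intrinsic dimension $d$ are notational placeholders introduced here for illustration, not symbols fixed by the paper:

$$\Phi : \mathbb{X} \cap B \longrightarrow [-1,1]^d, \qquad d \ll D, \qquad f\big|_{\mathbb{X} \cap B} \approx P \circ \Phi,$$

where $B$ is a small ball in $\mathbb{R}^D$ and $P$ is a function on the low dimensional cube $[-1,1]^d$ that can then be approximated by classical tools such as polynomials or B-splines (see the summary outline below).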

Highlights

  • Machine learning is an active sub-field of Computer Science concerned with developing algorithms for learning and making predictions from given data, with a long list of applications ranging from computational finance and advertising, to information retrieval, to computer vision, to speech and handwriting recognition, and to structural health monitoring and medical diagnosis

  • In Mhaskar [30], we gave an explicit construction for a neural network that achieves accuracy $\epsilon$ using $O(\epsilon^{-D/r})$ neurons arranged in a single hidden layer (a short derivation of this count is sketched after this list)

  • If the smoothness $r$ of the function increases linearly with $D$, as it has to in order to satisfy the condition in Barron [26], this bound is “dimension independent.” While this exponential dependence on $D$ is otherwise unavoidable for neural networks with one hidden layer, the most natural way out is to achieve local approximation; i.e., given an input $x$, construct a network with a uniformly bounded number of neurons that approximates the target function with the optimal rate of approximation near the point $x$, preferably using the values of the function in a neighborhood of $x$
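
To indicate where the neuron count in the second highlight comes from, here is the standard approximation-theoretic calculation (simple algebra on the stated rate, not an additional claim from the paper): for a target function with $r$ derivatives in $D$ variables, single hidden layer networks with $n$ neurons achieve a worst case error of order $n^{-r/D}$, so requiring

$$n^{-r/D} \le \epsilon \quad\Longrightarrow\quad n = O\!\left(\epsilon^{-D/r}\right).$$

If $r$ grows linearly with $D$, the exponent $D/r$ stays bounded, which is the sense in which the bound is “dimension independent”; otherwise the number of neurons grows exponentially with $D$.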


Summary

INTRODUCTION

Machine learning is an active sub-field of Computer Science concerned with developing algorithms for learning and making predictions from given data, with a long list of applications ranging from computational finance and advertising, to information retrieval, to computer vision, to speech and handwriting recognition, and to structural health monitoring and medical diagnosis. In Mhaskar [30], we gave an explicit construction for a neural network that achieves accuracy $\epsilon$ using $O(\epsilon^{-D/r})$ neurons arranged in a single hidden layer, where $r$ is the smoothness of the target function of $D$ variables. If the smoothness $r$ increases linearly with $D$, as it has to in order to satisfy the condition in Barron [26], this bound is “dimension independent.” While the exponential dependence on $D$ is otherwise unavoidable for neural networks with one hidden layer, the most natural way out is to achieve local approximation; i.e., given an input $x$, construct a network with a uniformly bounded number of neurons that approximates the target function with the optimal rate of approximation near the point $x$, preferably using the values of the function in a neighborhood of $x$.
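
As a purely illustrative sketch of local approximation (not the paper's deep net construction; the function name local_fit and the parameter radius are hypothetical), one can estimate $f$ at a query point $x$ by a distance-weighted least-squares fit of a degree one polynomial to the training values in a neighborhood of $x$:

    import numpy as np

    def local_fit(X, y, x, radius=0.5):
        """Estimate f(x) from samples (X, y) lying within `radius` of x.

        Fits an affine model a + b . (z - x) by distance-weighted least
        squares; its value at z = x is the coefficient a.
        """
        dist = np.linalg.norm(X - x, axis=1)
        mask = dist < radius
        Xn, yn = X[mask], y[mask]
        # Design matrix for the affine model; Gaussian weights decay with distance.
        A = np.hstack([np.ones((len(Xn), 1)), Xn - x])
        w = np.exp(-(dist[mask] / radius) ** 2)
        coef, *_ = np.linalg.lstsq(A * w[:, None], yn * w, rcond=None)
        return coef[0]

    # Usage: recover f(x) = sin(pi * x_0) at x = (0.3, -0.2) from scattered samples.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(500, 2))
    y = np.sin(np.pi * X[:, 0])
    print(local_fit(X, y, np.array([0.3, -0.2])))  # approximately sin(0.3 * pi)

The sketch mirrors the highlighted idea in spirit: the number of parameters is uniformly bounded (here three coefficients, independent of the sample size), and only values of the function near $x$ enter the fit.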

MAIN IDEAS AND RESULTS
Outline of the Main Idea
Local Coordinate Learning
Local Basis Functions
Polynomials
B-Splines
Function Approximation
Objective
EXTENSIONS
Pre-image Problem
Out of Sample Extension
