Abstract

In this paper, we develop an algebraic framework for analyzing neural network approximation of compositional functions, a rich class of functions frequently encountered in applications. The framework is designed to support error analysis not only for functions given as input-output relations, but also for numerical algorithms. This capability is critical because it enables the analysis of neural network approximation errors for problems whose analytic solutions are not available, such as differential equations and optimal control. We identify a set of key compositional features and their relationship with the complexity of neural network approximations. We prove that, in the approximation of functions, differential equations, and optimal control, the complexity of neural networks is bounded by a polynomial function of the key features and the error tolerance. The results shed light on why neural network approximations help to avoid the curse of dimensionality.
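Schematically, and with placeholder symbols rather than the paper's own notation, a complexity bound of the kind stated above takes the form

\[ n(f, \varepsilon) \;\le\; p\bigl(\kappa_1(f), \ldots, \kappa_m(f), \varepsilon^{-1}\bigr), \]

where \( n(f, \varepsilon) \) is the size of a neural network approximating \( f \) to tolerance \( \varepsilon \), \( \kappa_1(f), \ldots, \kappa_m(f) \) stand for the key compositional features, and \( p \) is a polynomial. The precise statement and the definitions of the features are given in the body of the paper; because the bound is polynomial in the features rather than exponential in the input dimension, it does not suffer from the curse of dimensionality.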
