Abstract

We construct and discuss a functional equation with contraction property. The solutions are real univariate polynomials. The series solving the natural fixed point iterations have immediate interpretation in terms of Neural Networks with recursive properties and controlled accuracy.

Highlights

  • From the article by Bruno Després and Matthieu Ancellin: it has been observed recently in [5, 17] that a certain generalization of the Takagi function [10] to the square function x → x² has an interesting interpretation in terms of simple Neural Networks with the ReLU function R(x) = max(0, x) as an activation function.

  • In this Note, we generalize the principle of the functional equation [9] to any real univariate polynomial x → H(x), by using techniques which are standard in numerical analysis.

  • Considering the literature [8] on the current understanding of the mathematical structure of Neural Networks, the most original output of the construction is a novel functional equation with three main properties: (a) it has general polynomial solutions under the conditions of the main Theorem; (b) it is contractive, so it is solved by any standard fixed point procedure; and (c) the converging fixed point iterations can be implemented as reference solutions in Feedforward Deep Networks with ReLU activation function [8, Chapter 6], with controlled accuracy.
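The ReLU interpretation of the Takagi-type approximation of x → x² mentioned above can be illustrated with a minimal sketch. The construction below follows the well-known telescoping scheme (a triangle wave built from three ReLU units, composed with itself and summed with geometrically decaying weights), which yields a piecewise linear approximation of x² on [0, 1] with error at most 4^-(m+1) after m stages. This is an illustration of the general idea, not the specific functional equation of the Note; the function names are ours.

```python
import numpy as np

def relu(x):
    # ReLU activation R(x) = max(0, x)
    return np.maximum(0.0, x)

def hat(x):
    # Triangle wave from three ReLU units:
    # hat(x) = 2x on [0, 1/2], 2 - 2x on [1/2, 1], 0 outside
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

def relu_square(x, m):
    # m-stage ReLU approximation of x**2 on [0, 1]:
    #   f_m(x) = x - sum_{s=1}^m hat^(s)(x) / 4**s,
    # where hat^(s) is the s-fold composition of hat.
    # f_m is the piecewise linear interpolant of x**2 on the
    # dyadic grid k/2**m, so the error is bounded by 4**-(m+1).
    h = np.asarray(x, dtype=float)
    out = np.asarray(x, dtype=float)
    for s in range(1, m + 1):
        h = hat(h)
        out = out - h / 4**s
    return out

if __name__ == "__main__":
    xs = np.linspace(0.0, 1.0, 1001)
    for m in (2, 4, 6):
        err = np.max(np.abs(relu_square(xs, m) - xs**2))
        print(f"m={m}: max error {err:.2e} (bound {4.0**-(m+1):.2e})")
```

Each extra stage divides the worst-case error by 4, which is the "controlled accuracy" property: the depth of the network directly prescribes the approximation error.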


Summary

Introduction

In [17], this generalization is the basis of a general theorem on the approximation of functions by Neural Network architectures; see also [13]. In this Note, we generalize the principle of the functional equation [9] to any real univariate polynomial x → H(x), by using techniques which are standard in numerical analysis. A similar construction may already have been considered in the immense literature on polynomials, but to our knowledge never in combination with a discussion of Neural Network architectures.
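The contraction property claimed for the functional equation means the standard Banach fixed point argument applies: Picard iteration converges geometrically to the unique solution. The sketch below illustrates this generic principle on a scalar toy contraction (x → cos x on [0, 1]); the paper's actual equation acts on polynomials, and the names here are ours.

```python
import math

def fixed_point(phi, x0, tol=1e-12, max_iter=200):
    # Picard iteration x_{k+1} = phi(x_k).
    # If phi is a contraction with factor q < 1, the iterates
    # converge geometrically to the unique fixed point.
    x = x0
    for k in range(max_iter):
        x_next = phi(x)
        if abs(x_next - x) <= tol:
            return x_next, k + 1
        x = x_next
    return x, max_iter

if __name__ == "__main__":
    # cos is a contraction on [0, 1] since |cos'(x)| = |sin x| <= sin(1) < 1
    root, iters = fixed_point(math.cos, 0.5)
    print(root, iters)  # root of x = cos(x), approximately 0.7390851332
```

The same skeleton works for any contractive map, which is why property (b) of the Highlights guarantees that essentially any standard fixed point procedure converges.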

A contractive functional equation
Application to Neural Networks
A first Neural Network implementation
Accuracy
Splitting strategy
Reconfiguration of the Network
Recursive and recurrent Neural Networks
Numerical examples
Last remarks
