Abstract

In this paper, we consider the max-product neural network operators of the Kantorovich type based on certain linear combinations of sigmoidal and ReLU activation functions. In general, it is well known that max-product type operators have applications in problems related to probability and fuzzy theory, involving both real and interval/set-valued functions. In particular, here we address inverse approximation problems for the above family of sub-linear operators. We first establish their saturation order for a certain class of functions; i.e., we show that if a continuous and non-decreasing function f can be approximated at a rate of convergence faster than 1/n, as n goes to +∞, then f must be a constant. Furthermore, we prove a local inverse theorem of approximation; i.e., assuming that f can be approximated with a rate of convergence of 1/n, we show that f turns out to be a Lipschitz continuous function.
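
For orientation, the operators in question can be sketched as follows. This is a hedged reconstruction of the general form that max-product Kantorovich neural network operators take in the literature; the precise kernel, index set, and interval used in the paper may differ. Writing φ_σ for a density function generated by the activation σ and ⋁ for the maximum over k, on [0,1] one typically sets

$$
K_n^{(M)}(f)(x) \;=\; \frac{\displaystyle \bigvee_{k=0}^{n-1} \varphi_\sigma(nx-k)\, \Big[\, n \int_{k/n}^{(k+1)/n} f(u)\, du \Big]}{\displaystyle \bigvee_{k=0}^{n-1} \varphi_\sigma(nx-k)}, \qquad x \in [0,1].
$$

In this notation, the results of the paper read: if the sup-norm error ‖K_n^{(M)} f − f‖_∞ decays faster than 1/n, then f is constant, while a decay of order 1/n forces f to be (locally) Lipschitz continuous.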

Highlights

  • The introduction of the max-product version of families of linear approximation operators is due to Bede, Coroianu and Gal, and it led to a new branch of approximation theory

  • We study the max-product form of the neural network (NN) operators of the Kantorovich type

  • More generally, problems of approximation are related to the topic of training a neural network by sample values belonging to a certain training set; this explains the interest in studying approximation results by means of NN operators in various contexts [15,20,21,22,23,24]

Summary

Introduction

Independently of its biological meaning, in some recent papers a new unbounded activation function has been introduced and deeply investigated: the so-called rectified linear unit (ReLU) function (see, e.g., [19]), defined as the positive part of x, i.e., ReLU(x) := max{x, 0}, for every x ∈ R. More generally, problems of approximation are related to the topic of training a neural network by sample values belonging to a certain training set; this explains the interest in studying approximation results by means of NN operators in various contexts [15,20,21,22,23,24].
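
As a concrete illustration, the following is a minimal numerical sketch in Python. It assumes the logistic sigmoid as the sigmoidal activation and the density φ_σ(x) = (σ(x+1) − σ(x−1))/2 that is common in the NN-operator literature, together with a max-product Kantorovich construction on [0,1]; the specific linear combinations of sigmoidal and ReLU activations studied in the paper are not reproduced here, so all names and normalizations are illustrative only.

import numpy as np

def sigmoid(x):
    # logistic function, one admissible sigmoidal activation
    return 1.0 / (1.0 + np.exp(-x))

def phi(x):
    # density generated by the sigmoidal function; a common choice in the
    # NN-operator literature (the kernel used in the paper may differ)
    return 0.5 * (sigmoid(x + 1.0) - sigmoid(x - 1.0))

def K_max(f, n, x, samples_per_cell=50):
    # Illustrative max-product Kantorovich NN operator on [0, 1]:
    #   K_n(f)(x) = max_k [ phi(n x - k) * (mean of f on [k/n, (k+1)/n]) ]
    #               / max_k phi(n x - k)
    k = np.arange(n)
    u = (k[:, None] + np.linspace(0.0, 1.0, samples_per_cell)[None, :]) / n
    cell_means = f(u).mean(axis=1)      # approximates n * integral of f over each cell
    weights = phi(n * x - k)
    return np.max(weights * cell_means) / np.max(weights)

# Toy check: for a Lipschitz, non-decreasing f the sup-error over a grid is
# printed for increasing n, as a rough empirical look at the 1/n-type behaviour.
f = lambda t: np.minimum(2.0 * t, 1.0)
for n in (10, 20, 40, 80):
    xs = np.linspace(0.0, 1.0, 201)
    err = max(abs(K_max(f, n, x) - f(x)) for x in xs)
    print(n, round(err, 4))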

Preliminaries
The Saturation Order
Local Inverse Result