Abstract

Quadratic functions yield good rates of approximation when used as activation functions in feedforward neural networks. Monotonicity is also important in describing a function's behavior, and hence the behavior of its constrained approximation. Previously, the degree of approximation by feedforward neural networks with a quadratic activation function was proved to be bounded by the second-order modulus of smoothness, with no higher order attainable. In this paper, we discuss whether these estimates can be improved for Lebesgue-integrable functions. With nearly monotone approximation a higher-order modulus of smoothness becomes available, whereas for strictly monotone approximation it does not. We obtain a nearly monotone approximation by partitioning the interval [0,1] into subintervals of arbitrarily small length and then excluding small neighborhoods of the endpoints of the partition's subintervals. However, counterexamples rule out any further improvement outside that restricted set. All the results are proved in the Lp-space with p < 1.
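To fix ideas, the display below is a hedged sketch of the shape such nearly monotone estimates typically take; the excluded set E_n, the modulus order k, the network N_n, and the constant C are illustrative placeholders, not quantities quoted from the paper. A quadratic-activation network N_n that is monotone off E_n would satisfy an estimate of the form

% Illustrative sketch only: E_n, k, C, and N_n are hypothetical placeholders.
% E_n is a union of small neighborhoods of the partition endpoints,
% removed from [0,1] so that a modulus of order k > 2 becomes accessible.
\[
  \| f - N_n \|_{L_p([0,1]\setminus E_n)}
    \le C \,\omega_k\!\Big(f, \tfrac{1}{n}\Big)_p ,
  \qquad 0 < p < 1,\quad k > 2,
\]
% whereas for approximants monotone on all of [0,1] the order is capped at k = 2.

The point of the construction is that monotonicity is only required off the excluded set E_n, which is what permits a modulus of order higher than two.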
