Abstract

Research on the approximation theory of artificial neural networks is still far from complete. To help fill this gap, this study focuses on the almost everywhere approximation capabilities of single-hidden-layer feedforward double Mellin approximate identity neural networks. First, the notion of a double Mellin approximate identity is introduced. Using this notion, an auxiliary theorem is proved that connects a class of double Mellin convolution linear operators with the notion of almost everywhere convergence. This auxiliary theorem is then applied, together with an epsilon-net argument, to prove the main theorem. The main theorem establishes the almost everywhere approximation capability of single-hidden-layer feedforward double Mellin approximate identity neural networks in the space of almost everywhere continuous bivariate functions on $\mathbb{R}_{+}^{2}$. Moreover, similar results are obtained in the spaces of almost everywhere Lebesgue integrable bivariate functions on $\mathbb{R}_{+}^{2}$.
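
For orientation, here is a standard formulation consistent with the abstract's terminology; the paper's exact definitions and normalizations may differ. A double Mellin approximate identity can be taken as a family of kernels $K_{n} : \mathbb{R}_{+}^{2} \to \mathbb{R}$ with unit Mellin mass,

$$ \int_{0}^{\infty} \int_{0}^{\infty} K_{n}(s, t) \, \frac{ds \, dt}{s t} = 1, $$

whose mass concentrates near the point $(1, 1)$ as $n \to \infty$. The associated double Mellin convolution linear operators then act on a bivariate function $f$ on $\mathbb{R}_{+}^{2}$ by

$$ (T_{n} f)(x, y) = \int_{0}^{\infty} \int_{0}^{\infty} K_{n}\!\left(\frac{x}{s}, \frac{y}{t}\right) f(s, t) \, \frac{ds \, dt}{s t}, $$

with $T_{n} f \to f$ almost everywhere; a single-hidden-layer feedforward network of the form $\sum_{k=1}^{m} c_{k} \, K_{n}(x / a_{k}, \, y / b_{k})$ discretizes such an operator.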
