Abstract

Neural networks (NNs) remain black boxes despite their many success stories. This is mainly because, even when they succeed, they reveal only the complex structure of the underlying network, validated against a huge data set. In this paper, we propose a statistical NN learning model, related to the concept of the universal Turing computer, for regression prediction. Based on this model, we define 'statistically successful NN (SSNN) learning.' This is done mainly by calculating the well-known Cramér–Rao lower bound for the averaged square error (ASE) of NN learning. Using this formal definition, we propose an ASE-based NN learning (ANL) algorithm. The ANL algorithm not only attains the Cramér–Rao lower bound but also provides an effective way to map the complicated geometry of the ASE over the NN hyperparameter space. This frees the ANL from the need for a huge validation data set. A simple numerical simulation and a real-data analysis are presented to evaluate the performance of the ANL and to show how to implement it.
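For context, the two quantities the abstract names can be sketched in their standard textbook forms; the paper's ASE-specific bound is derived in the full text and is not reproduced here. Assuming the usual definition of the ASE of a regression predictor \hat{f} over n held-out pairs (x_i, y_i), and the classical Cramér–Rao inequality for an unbiased estimator \hat{\theta} with likelihood f(X;\theta):

\[
  \mathrm{ASE}(\hat{f}) \;=\; \frac{1}{n} \sum_{i=1}^{n} \bigl( y_i - \hat{f}(x_i) \bigr)^2
\]

\[
  \operatorname{Var}_{\theta}(\hat{\theta}) \;\ge\; I(\theta)^{-1},
  \qquad
  I(\theta) \;=\; \mathbb{E}_{\theta}\!\left[ \left( \frac{\partial}{\partial \theta} \log f(X;\theta) \right)^{\!2} \right]
\]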
