Abstract

Neural network modeling often lacks a systematic way of improving classical statistical regression models. In this tutorial we exemplify the proposal of the editorial of ASTIN Bulletin 2019/1. We embed a classical generalized linear model (GLM) into a neural network architecture, and we let this nested network approach explore model structure not captured by the GLM. Moreover, if the GLM is already close to optimal, its maximum likelihood estimator can serve as initialization for the fitting algorithm of the neural network. This saves computational time because the fitting algorithm starts at a reasonable parameter value. As a by-product of our derivations, we present embedding layers and representation learning, which often provide a more efficient treatment of categorical features within neural networks than dummy or one-hot encoding.
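The nesting idea described above can be sketched numerically: the GLM's linear predictor enters the network through a skip connection, and the neural network correction term has zero output weights at initialization, so the nested model reproduces the GLM exactly before training begins. The following is a minimal numpy sketch under assumed toy data and hypothetical GLM coefficients (a real application would fit the GLM by maximum likelihood first and train the network weights afterwards):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy design matrix and hypothetical Poisson GLM coefficients
# (in practice, beta is the GLM's maximum likelihood estimator).
X = rng.normal(size=(5, 3))
beta = np.array([0.2, -0.1, 0.3])
glm_log_mu = X @ beta                # GLM linear predictor (log link)

# One-hidden-layer correction network; the output weights start at zero,
# so at initialization the nested model coincides with the GLM.
W1 = 0.1 * rng.normal(size=(3, 4))
b1 = np.zeros(4)
w_out = np.zeros(4)                  # zero init: no correction yet

nn_correction = np.tanh(X @ W1 + b1) @ w_out
cann_log_mu = glm_log_mu + nn_correction   # skip connection around the NN
assert np.allclose(cann_log_mu, glm_log_mu)

# Embedding-layer idea for a categorical feature with 3 levels:
# each level maps to a learned low-dimensional vector instead of a
# sparse one-hot vector.
embedding = rng.normal(size=(3, 2))  # 3 levels, 2-dimensional embedding
levels = np.array([0, 2, 1, 0, 2])
dense_cat = embedding[levels]        # shape (5, 2) dense representation
```

Training would then update W1, b1, w_out (and the embedding) by gradient descent, starting from predictions identical to the GLM's.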
