Abstract

The chapter opens with an introduction to regression and its implementation within the maximum-likelihood framework. This is followed by a general introduction to classical confidence intervals and prediction intervals. We set the scene by first considering confidence and prediction intervals based on univariate samples, and then we progress to regarding these intervals in the context of linear regression and logistic regression. Since a feed-forward neural network is a type of regression model, the concepts of confidence and prediction intervals are applicable to these networks, and we look at several techniques for constructing them via maximum-likelihood estimation. An alternative to the maximum-likelihood framework is Bayesian statistics, and we examine the notions of Bayesian confidence and prediction intervals as applied to feed-forward networks. This includes a critique of Bayesian confidence intervals in the classification setting.
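As a point of orientation for the univariate starting point mentioned above, the sketch below computes a large-sample confidence interval for the mean and a prediction interval for a single new observation. This is an illustrative normal-approximation example, not code from the chapter (which treats the exact t-based intervals); the function name and sample data are invented for the example.

```python
import math
from statistics import NormalDist, mean, stdev

def intervals(sample, alpha=0.05):
    """Large-sample (z-based) confidence interval for the mean and
    prediction interval for one new observation.

    Illustrative normal approximation only: for small samples the
    t-based intervals discussed in the chapter are wider.
    """
    n = len(sample)
    m = mean(sample)
    s = stdev(sample)                        # sample standard deviation
    z = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value

    # Confidence interval: uncertainty in the estimated mean only.
    ci_half = z * s / math.sqrt(n)
    ci = (m - ci_half, m + ci_half)

    # Prediction interval: adds the variance of one new draw,
    # so it is always wider than the confidence interval.
    pi_half = z * s * math.sqrt(1 + 1 / n)
    pi = (m - pi_half, m + pi_half)
    return ci, pi

sample = [9.8, 10.2, 10.0, 9.9, 10.1, 10.3, 9.7, 10.0]
ci, pi = intervals(sample)
```

The same distinction, interval for a mean response versus interval for a new observation, carries over to the regression and neural-network settings the chapter develops.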
