Abstract
Artificial neural networks (ANNs) have become very popular for data analysis over the past decade. Feed-forward networks in particular have, it may be argued, become so popular because, as classifiers, they can estimate a posteriori probabilities directly by forming a mapping from the data space to a probability space, and, as regressors, they can estimate the conditional average of the target data given an input. If, however, we are to exploit the undoubted utility of ANNs in safety-critical environments, then classification or regression performance is not in itself enough. A key requirement of any statistical analysis system is that it assess its own confidence in a decision in the case of classification, and estimate probable error bars in the case of regression. Part of the problem for any Bayesian classifier is that the posteriors, by definition, sum to unity, so a classification is always made into one of a closed set of classes. If rogue data appears, then even if it fails to conform to the statistics of the genuine data, it will be classified with apparent confidence into one of the output classes. We must, therefore, monitor the confidence in any classification decision. This paper examines some of the issues in estimating errors and confidence limits in feed-forward networks, and results are presented on a simple regression problem and an example from medical diagnostics.
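The closed-set problem described above can be seen concretely: because softmax posteriors must sum to one, even an input far from all training data receives a confident class assignment. The sketch below is not from the paper; it uses an assumed setup of synthetic two-dimensional Gaussian classes and a linear softmax classifier trained by gradient descent, purely to illustrate the point.

```python
# Minimal sketch (assumed setup, not the paper's experiment): a linear
# softmax classifier on two synthetic Gaussian classes. Posteriors always
# sum to unity, so a rogue input far from both clusters is still assigned
# to one class with near-certain apparent confidence.
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated 2-D Gaussian classes (hypothetical training data).
X = np.vstack([rng.normal([-2.0, 0.0], 1.0, (100, 2)),
               rng.normal([+2.0, 0.0], 1.0, (100, 2))])
y = np.repeat([0, 1], 100)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Train a linear softmax classifier with plain gradient descent on
# the cross-entropy loss.
W = np.zeros((2, 2))
b = np.zeros(2)
Y = np.eye(2)[y]                           # one-hot targets
for _ in range(500):
    P = softmax(X @ W + b)
    W -= 0.5 * (X.T @ (P - Y)) / len(X)
    b -= 0.5 * (P - Y).mean(axis=0)

# A rogue input nowhere near either training cluster...
rogue = np.array([[50.0, 50.0]])
print(softmax(rogue @ W + b))  # ...still yields a near-certain posterior
```

The printed posterior is close to [0, 1] even though the rogue point fails to conform to the statistics of either class, which is exactly why the paper argues that confidence in a classification must be monitored separately from the classification itself.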