Abstract

Feed-forward neural networks (FFNs) have attracted considerable interest over the last decade as empirical models, owing to their powerful representational capacity, non-parametric nature, and multivariate character. While these models focus primarily on accurate prediction of output values, often outperforming their statistical counterparts on sparse data sets, they usually provide no information about the confidence with which those predictions are made. Since prediction limits (PLs) indicate the extent to which one can rely on predictions when making future decisions, estimating these limits is of paramount importance. Two empirical PL estimation methods for FFNs are presented. The methods differ in one fundamental aspect: how the properties of the neural network model residuals are modelled. One method uses a local approximation scheme, while the other uses a global approximation scheme. Simulation results reveal that each method has its relative strengths and weaknesses.
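The abstract does not spell out the two algorithms, but the local-versus-global distinction in residual modelling can be illustrated with a minimal sketch. In the sketch below, all names and parameters (the stand-in network output, the choice of k nearest neighbours, the z multiplier) are illustrative assumptions, not the paper's method: a global scheme pools all training residuals into one spread estimate, while a local scheme estimates the spread from residuals of training points near the query input.

```python
# Hypothetical sketch (not the paper's exact algorithms): empirical prediction
# limits from FFN residuals, contrasting a global with a local residual model.
import numpy as np

def global_prediction_limits(y_pred, residuals, z=1.96):
    """Global scheme: one residual spread estimated from all training residuals."""
    sigma = residuals.std(ddof=1)            # single, input-independent spread
    return y_pred - z * sigma, y_pred + z * sigma

def local_prediction_limits(x_new, y_pred, x_train, residuals, k=20, z=1.96):
    """Local scheme: residual spread estimated from the k nearest training points."""
    lower, upper = np.empty(len(x_new)), np.empty(len(x_new))
    for i, x in enumerate(x_new):
        dist = np.linalg.norm(x_train - x, axis=1)          # distances in input space
        sigma = residuals[np.argsort(dist)[:k]].std(ddof=1) # spread near the query
        lower[i], upper[i] = y_pred[i] - z * sigma, y_pred[i] + z * sigma
    return lower, upper

# Usage with synthetic data and a stand-in "network" prediction:
rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, size=(200, 2))
noise_sd = 0.1 + 0.3 * np.abs(x_train[:, 0])          # input-dependent noise
y_train = x_train.sum(axis=1) + rng.normal(0, noise_sd, 200)
y_fit = x_train.sum(axis=1)                           # pretend this is the FFN output
residuals = y_train - y_fit

x_new = rng.uniform(-1, 1, size=(5, 2))
y_new = x_new.sum(axis=1)
print(global_prediction_limits(y_new, residuals))
print(local_prediction_limits(x_new, y_new, x_train, residuals))
```

With heteroscedastic noise as above, the local scheme can adapt the limit width to the query region, whereas the global scheme yields constant-width limits; which behaviour is preferable depends on the data, consistent with the abstract's observation that each method has relative strengths and weaknesses.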
