Abstract

A novel technique for the evaluation of neural network robustness against uncertainty using a nonprobabilistic approach is presented. Conventional optimization techniques were employed to train multilayer perceptron (MLP) networks, which were then probed with an uncertainty analysis using an information-gap model to quantify the network response to uncertainty in the input data. It is demonstrated that the best performing network on data with low uncertainty is not, in general, the optimal network on data with a higher degree of input uncertainty. Using the concepts of information-gap theory, this paper develops a theoretical framework for information-gap uncertainty applied to neural networks, and explores the practical application of the procedure to three sample cases. The first consists of a simple two-dimensional (2-D) classification network operating on a known Gaussian distribution, the second a nine-class vibration classification problem from an aircraft wing, and the third a two-class example from a database of breast cancer incidence.
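
The following is a minimal Python sketch of the kind of robustness probe the abstract describes, not the authors' actual procedure. It assumes a standard envelope-bound info-gap model U(alpha, u~) = {u : ||u - u~|| <= alpha}, approximates the worst-case performance inside that set by random sampling, and stands in a fixed linear decision rule for a trained MLP; the toy 2-D Gaussian data loosely mirrors the paper's first example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class 2-D Gaussian problem, loosely echoing the paper's first case.
n = 200
X = np.vstack([rng.normal(-1.0, 1.0, size=(n, 2)),
               rng.normal(1.0, 1.0, size=(n, 2))])
y = np.array([0] * n + [1] * n)

# Stand-in for a trained network: a fixed linear decision rule, purely for
# illustration (the paper trains MLPs with conventional optimization).
w, b = np.array([1.0, 1.0]), 0.0

def predict(X):
    return (X @ w + b > 0).astype(int)

def worst_case_accuracy(X, y, alpha, n_samples=200):
    """Approximate the minimum accuracy over perturbed inputs drawn from the
    info-gap set U(alpha) = {u : ||u - u_tilde||_inf <= alpha} by sampling.
    (A sampling approximation, not an exact inner minimization.)"""
    worst = 1.0
    for _ in range(n_samples):
        delta = rng.uniform(-alpha, alpha, size=X.shape)
        worst = min(worst, float(np.mean(predict(X + delta) == y)))
    return worst

# Robustness curve: performance as the uncertainty horizon alpha grows.
for alpha in (0.0, 0.5, 1.0, 2.0):
    print(f"alpha = {alpha:.1f}  worst-case accuracy ~ "
          f"{worst_case_accuracy(X, y, alpha):.3f}")
```

Plotting such curves for two trained networks, rather than a single fixed rule, is one way to exhibit the crossing behaviour the abstract reports: the network that wins at alpha = 0 need not win at larger uncertainty horizons.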
