Abstract

This work examines two methods for studying variable influence and contribution in neural network (NN) models. The first, a variable sensitivity analysis method, is based on sequential zeroing of weights (SZW) of the connections between the input variables and the first hidden layer of a trained NN model. The second is based on systematic variation of variables (SVV), in which one variable is varied while the others are either held constant or varied synchronously. For synthetic data sets, the results obtained with the proposed methods closely reflect the nature of the functions used to generate the data. Standard NN models are thus not only suitable for approximating nonlinear functional relationships but also able, to a high degree, to represent the nature of the input variables. We thereby demonstrate that highly interconnected NN models, often regarded as black boxes, can be made highly transparent. The information about the variables generated by the proposed methods can therefore guide the interpretation of variable influence and contribution as well as variable selection. The proposed methods are further compared with other sensitivity analysis methods, such as statistical sensitivity analysis (SSA) and β-tests. Finally, the methods applied to the synthetic data sets were also used on three real data sets, yielding, for instance, additional information on the effect of principal component (PC) regularization of the input variables.
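To make the two analyses concrete, the following is a minimal Python/NumPy sketch, not the authors' implementation: it uses a toy one-hidden-layer network with random stand-in weights, three inputs, and illustrative data, and all names (predict, szw_influence, and so on) are hypothetical. It illustrates SZW as zeroing the first-layer weights of one input at a time and measuring the resulting change in the model output, and SVV as sweeping one input over its range while the remaining inputs are held at their means.

import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" network: y = W2 @ tanh(W1 @ x + b1) + b2.
# In the paper's setting these weights would come from a fitted model.
n_in, n_hidden = 3, 5
W1 = rng.normal(size=(n_hidden, n_in))
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=(1, n_hidden))
b2 = rng.normal(size=1)

def predict(X, W1=W1):
    # Forward pass; W1 is a parameter so SZW can pass a modified copy.
    return (W2 @ np.tanh(W1 @ X.T + b1[:, None]) + b2[:, None]).T

X = rng.uniform(-1.0, 1.0, size=(200, n_in))  # illustrative input data
y_ref = predict(X)

# SZW: sequentially zero the weights connecting one input variable to the
# first hidden layer and measure how much the output changes; a larger
# change suggests a more influential variable.
szw_influence = []
for i in range(n_in):
    W1_zeroed = W1.copy()
    W1_zeroed[:, i] = 0.0  # cut all connections from input i
    y_zeroed = predict(X, W1=W1_zeroed)
    szw_influence.append(np.mean((y_ref - y_zeroed) ** 2))
print("SZW influence:", np.round(szw_influence, 4))

# SVV: sweep one variable across its range while the others are held
# constant (here at their means); the response curve reflects that
# variable's contribution to the model output.
grid = np.linspace(-1.0, 1.0, 50)
for i in range(n_in):
    X_sweep = np.tile(X.mean(axis=0), (grid.size, 1))
    X_sweep[:, i] = grid
    response = predict(X_sweep).ravel()
    print(f"x{i}: output range under SVV = {response.max() - response.min():.4f}")

In practice the network would be a trained model rather than one with random weights, and the SZW scores would typically be normalized across variables; the resulting rankings could then be compared against SSA or β-test results, as the abstract describes.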
