Abstract

This paper describes the authors' recent research on the internal workings of feed-forward artificial neural networks (ANNs). Knowledge of the inner workings of ANNs has progressed with the evolution of ANN modelling techniques. In the early stages, neural networks consisted of perceptrons and were easy to interpret. Since the emergence of backpropagation learning in the 1980s, ANN models have become much more complex, and even today many users still regard them as black-box models. In this paper, a systematic approach is demonstrated for studying the behaviours of ANN models in an attempt to open up the ANN "black box". The authors emphasise that feed-forward ANN models are functions, and use a graphical interpretation technique to open the ANN models and show the detailed activity of each inner component through several case studies. These studies include how the training process changes the internal components of an ANN model, how noise affects the training process, and how the sensitivity of ANN models depends on the training data.

Key words: artificial neural network, ANN black box, artificial neuron, connection weight, activation function, knowledge extraction.
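The abstract's central point, that a feed-forward ANN model is simply a function composed of weighted sums and activation functions, can be illustrated with a minimal sketch. The network size (2 inputs, 2 hidden neurons, 1 output) and all weight values below are hypothetical, chosen for illustration only and not taken from the paper:

```python
import math

def sigmoid(x):
    """Logistic activation function, common in backpropagation-era ANNs."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical connection weights and biases for a 2-2-1 network.
W1 = [[0.5, -0.3], [0.8, 0.2]]   # input -> hidden weights
b1 = [0.1, -0.1]                 # hidden-layer biases
W2 = [0.7, -0.4]                 # hidden -> output weights
b2 = 0.05                        # output bias

def ann(x1, x2):
    """The whole model written out as an explicit function of its inputs."""
    # Each hidden neuron: weighted sum of inputs plus bias, then activation.
    h = [sigmoid(W1[i][0] * x1 + W1[i][1] * x2 + b1[i]) for i in range(2)]
    # Output neuron: weighted sum of hidden activations plus bias, then activation.
    return sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
```

Written this way, every internal component (each connection weight, bias, and activation) is visible and can be inspected directly, which is the kind of openness the graphical interpretation technique in the paper aims at.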
