Abstract

In this paper, we present an approach to visualizing the training process of neural networks. The main goal of such visualizations is to better understand how neural networks are trained and to support the creation and optimization of new neural networks. We implemented a tool that can produce a large number of visualizations by combining a set of transformations (for example: diff, plot2, angle-path, vec-dist-dev). To visualize the learning process, we recorded all the weights at every nth iteration. For the experiments, we used three neural networks: a network for digit recognition (DR), a network for the word2vec representation of words (WV), and a network that learned the simple logical XOR gate (XR). For further comparison, we also constructed a damaged counterpart (DR’, WV’, XR’) for each of the three original networks (DR, WV, XR). We produced over 30 visualizations, selected the 6 that could be interpreted most easily, and present them in this paper. The results indicate that certain regularities may exist in the visualizations of the original networks and that the visualizations of the damaged networks differ from those of the originals.

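The abstract mentions recording all weights at every nth iteration and then applying transformations such as diff. The sketch below is not the paper's tool; it is a minimal illustration, assuming a generic training loop, of how weight snapshots could be collected every n iterations and how a simple diff-style transformation between consecutive snapshots might be computed. The names `record_snapshots`, `diff_transform`, and the toy update are hypothetical.

```python
import numpy as np

def record_snapshots(train_step, get_weights, num_iters, n):
    """Run training and keep a flat copy of all weights every n-th iteration."""
    snapshots = []
    for it in range(num_iters):
        train_step(it)
        if it % n == 0:
            # Flatten every weight tensor into one vector so snapshots are comparable.
            flat = np.concatenate([w.ravel() for w in get_weights()])
            snapshots.append(flat.copy())
    return np.stack(snapshots)

def diff_transform(snapshots):
    """Diff-style transformation (assumed): element-wise change between consecutive snapshots."""
    return np.diff(snapshots, axis=0)

# Toy usage with a stand-in "network": two weight matrices nudged by random updates.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(3, 1))]

def train_step(it):
    for w in weights:
        w -= 0.01 * rng.normal(size=w.shape)  # placeholder for a real gradient update

snaps = record_snapshots(train_step, lambda: weights, num_iters=100, n=10)
deltas = diff_transform(snaps)
print(snaps.shape, deltas.shape)  # (10, 15) and (9, 15)
```

The rows of `deltas` (or of `snaps` themselves) can then be fed into plotting routines to build visualizations like those described in the paper, though the exact transformations used there are only named, not specified, in this abstract.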