Abstract
Neural networks have undeniably become far more effective, but these gains stem largely from faster clock speeds, larger memory, and GPU-enabled parallelization of up-front processing. What has been largely neglected over the last twenty or so years is an understanding of how the internal layers behave: how they converge during training and how they transform information across layers at test time, which may account for the common perception that the internal neural layers are opaque black boxes. This paper shows, in two parts, that this is not the case. Part one demonstrates, through matrix visualization, the feed-forward processing throughout a multi-layer convolutional neural network. Part two presents our derivative application of Kohonen's and Kosko's correlation matrix memory methods to consecutive pairs of layers within the network, forming stabilized and compressible associative memory matrices. The subtlety of part two is that these stabilized matrices can simply be multiplied together to form a single layer, thereby realizing the universal approximation theorem of Cybenko and Hornik. In effect, this anatomy of the neural network reveals how to open the black box and take advantage of its inner workings.
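To illustrate the idea behind part two, the sketch below builds Kohonen/Kosko-style correlation (outer-product) memories between consecutive layer activations and collapses them into a single matrix by multiplication. All names, shapes, and the random placeholder activations are illustrative assumptions; the paper works with activations from a trained convolutional network and includes a stabilization step not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical activations recorded from three consecutive layers of a
# trained network over a batch of inputs (shapes and values are placeholders).
n_samples, d0, d1, d2 = 500, 64, 32, 16
layer0 = rng.standard_normal((n_samples, d0))
layer1 = rng.standard_normal((n_samples, d1))
layer2 = rng.standard_normal((n_samples, d2))

def correlation_matrix_memory(x, y):
    """Correlation (outer-product) memory in the style of Kohonen and Kosko:
    M = sum_k y_k x_k^T, associating each input pattern with its output."""
    return y.T @ x  # shape (dim_y, dim_x)

# One associative memory per consecutive pair of layers.
M01 = correlation_matrix_memory(layer0, layer1)  # layer0 -> layer1
M12 = correlation_matrix_memory(layer1, layer2)  # layer1 -> layer2

# Each memory is a single linear map, so the chain collapses into one
# matrix by ordinary multiplication, i.e. a single-layer approximation.
M02 = M12 @ M01  # direct layer0 -> layer2 association

# Recall for a single probe pattern through the collapsed memory.
probe = layer0[0]
recalled = M02 @ probe
print(recalled.shape)  # (16,)
```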