Abstract

The influence of noise levels on image classification with neural networks has been studied before. However, little is known about how different levels of entropy affect the performance of non-linear systems such as Convolutional Neural Networks (CNNs), where the initial and final system states are predetermined and entropy serves as a performance function. This study provides insight into how a CNN evolves from its initial to its final state, examines the sensitive dependence on initial training conditions using a publicly available architecture and the MNIST dataset, and also discusses the effects of entropy on side-scan sonar imagery. The paper describes a method for testing the effects of varying degrees of entropy on the performance of a non-linear neural network system. The approach compares the performance of the "black box" system under four states: (1) a CNN trained and tested on the original, non-altered dataset with minimal interclass variance; (2) a CNN trained on the original dataset and tested on images with added levels of entropy; (3) a CNN trained on a dataset with varying levels of entropy and tested on the original labeled classes; and (4) a CNN trained on a dataset with varying levels of entropy and tested on the labeled classes with varying levels of entropy. The advantage of this approach is that we can trace the performance of a single-architecture CNN under varying levels of entropy, demonstrate the ability of the system to use noise to learn more abstract and complex features of the input space, and discuss the results in the light of a theoretical physical system.
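
The following is a minimal sketch of the four-state comparison described above, not the authors' exact protocol: it trains a small CNN on MNIST under clean and entropy-added conditions and evaluates every train/test combination. The use of additive Gaussian noise as the "entropy" perturbation, the Keras architecture, the noise level, and the helper name add_entropy are all illustrative assumptions.

```python
# Sketch of the four train/test states: {clean, noisy} training x {clean, noisy} testing.
# Gaussian noise is only a stand-in for the paper's entropy manipulation.
import numpy as np
import tensorflow as tf

def add_entropy(images, sigma):
    """Hypothetical helper: add zero-mean Gaussian noise with std `sigma` to [0, 1] images."""
    noisy = images + np.random.normal(0.0, sigma, images.shape)
    return np.clip(noisy, 0.0, 1.0).astype("float32")

def make_cnn():
    # Small single-architecture CNN used for every condition.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()
x_tr = (x_tr / 255.0).astype("float32")[..., None]
x_te = (x_te / 255.0).astype("float32")[..., None]

sigma = 0.3  # illustrative entropy level
datasets = {
    "clean": (x_tr, x_te),
    "noisy": (add_entropy(x_tr, sigma), add_entropy(x_te, sigma)),
}

results = {}
for train_name, (train_x, _) in datasets.items():
    model = make_cnn()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_x, y_tr, epochs=1, batch_size=128, verbose=0)
    for test_name, (_, test_x) in datasets.items():
        _, acc = model.evaluate(test_x, y_te, verbose=0)
        results[(train_name, test_name)] = acc  # one of the four states

for (tr, te), acc in results.items():
    print(f"train={tr:5s} test={te:5s} accuracy={acc:.3f}")
```

Tracing how the accuracy gap between these four cells changes as sigma is swept would give one concrete way to compare a single architecture's behavior across entropy levels.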
