Abstract

Although the introduction of deep learning has led to significant performance improvements in many machine learning applications, several recent studies have revealed that deep feedforward models are easily fooled. Fooling, in effect, results from the overgeneralization of neural networks over regions far from the training data. To circumvent this problem, this paper proposes a novel elaboration of standard neural network architectures called the competitive overcomplete output layer (COOL) neural network. Experiments demonstrate the effectiveness of COOL by visualizing the behavior of COOL networks in a low-dimensional artificial classification problem and by applying COOL to a high-dimensional vision domain (MNIST).
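The abstract does not spell out the mechanism, but the core idea of COOL is to replace each class's single output unit with several competing units. As a minimal sketch, assuming each class is represented by omega "member" units that share one softmax (so members of different classes compete for probability mass) and that a class's confidence is the product of its members' activations, a COOL output layer might look like the following. All names and the scaling choice here are illustrative, not taken verbatim from the paper.

```python
import torch
import torch.nn as nn

class COOLOutputLayer(nn.Module):
    """Sketch of a competitive overcomplete output layer.

    Each of `num_classes` classes is represented by `omega` member
    units. A single softmax over all members forces them to compete;
    a class's confidence is the product of its members' activations,
    rescaled by omega**omega so perfect agreement (each member at
    1/omega) maps to a confidence of 1. Names are illustrative.
    """

    def __init__(self, in_features: int, num_classes: int, omega: int = 3):
        super().__init__()
        self.num_classes = num_classes
        self.omega = omega
        self.fc = nn.Linear(in_features, num_classes * omega)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # One softmax across ALL member units, not per class.
        members = torch.softmax(self.fc(x), dim=-1)        # (B, C * omega)
        members = members.view(-1, self.num_classes, self.omega)
        # Aggregate each class's members by product; far from the
        # training data the members tend to disagree, so the product
        # collapses toward zero instead of overgeneralizing.
        return members.prod(dim=-1) * (self.omega ** self.omega)
```

Under this reading, training would spread the true class's label mass uniformly across its omega members (1/omega each), so that high confidence at test time requires unanimous agreement among units that otherwise compete, which is what suppresses confident predictions on fooling inputs far from the data.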
