Abstract

Prototype-based learning (PbL) using a winner-take-all (WTA) network based on minimum Euclidean distance (ED-WTA) is an intuitive approach to multiclass classification. By constructing meaningful class centers, PbL provides higher interpretability and generalization than hyperplane-based learning (HbL) methods based on maximum inner product (IP-WTA), and it can efficiently detect and reject samples that do not belong to any class. In this article, we first prove the equivalence of IP-WTA and ED-WTA from the perspective of representational power. We then show that naively exploiting this equivalence leads to unintuitive ED-WTA networks whose centers lie far from the data they represent. We propose ±ED-WTA, which models each neuron with two prototypes: a positive prototype, representing the samples modeled by that neuron, and a negative prototype, representing the samples erroneously won by that neuron during training. We propose a novel training algorithm for the ±ED-WTA network, which cleverly switches between updating the positive and negative prototypes and is essential to the emergence of interpretable prototypes. Unexpectedly, we observed that the negative prototype of each neuron is nearly indistinguishable from the positive one. The rationale behind this observation is that the training samples mistakenly won by a prototype are indeed similar to it. The main finding of this article is this interpretation of neuron functionality: each neuron computes the difference between its distances to a positive and a negative prototype, which is in agreement with the BCM theory. Our experiments show that the proposed ±ED-WTA method constructs highly interpretable prototypes that can be successfully used for explaining the functionality of deep neural networks (DNNs) and for detecting outliers and adversarial examples.
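As a rough illustration of the scoring rules described in the abstract, the sketch below contrasts the IP-WTA, ED-WTA, and ±ED-WTA classification steps. The prototype arrays `W`, `P_pos`, and `P_neg` and the exact form of the ±ED-WTA score (difference of squared distances to the negative and positive prototypes) are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def ip_wta(x, W):
    """IP-WTA: the winner is the unit with maximum inner product w_k . x."""
    return int(np.argmax(W @ x))

def ed_wta(x, W):
    """ED-WTA: the winner is the unit with minimum Euclidean distance ||x - w_k||.
    Note the equivalence: argmin_k ||x - w_k||^2 = argmax_k (w_k . x - ||w_k||^2 / 2),
    i.e., an inner product with a per-unit bias."""
    return int(np.argmin(np.sum((W - x) ** 2, axis=1)))

def pm_ed_wta(x, P_pos, P_neg):
    """±ED-WTA (assumed form): each unit holds a positive prototype p_k and a
    negative prototype n_k; its score is ||x - n_k||^2 - ||x - p_k||^2, so the
    winner is close to its positive prototype and far from its negative one."""
    d_pos = np.sum((P_pos - x) ** 2, axis=1)
    d_neg = np.sum((P_neg - x) ** 2, axis=1)
    return int(np.argmax(d_neg - d_pos))

# Toy usage with hypothetical prototypes for a 3-class problem in R^4.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))                     # IP-WTA / ED-WTA weights (illustrative)
P_pos = rng.normal(size=(3, 4))                 # positive prototypes (illustrative)
P_neg = P_pos + 0.1 * rng.normal(size=(3, 4))   # negative prototypes near the positive ones
x = rng.normal(size=4)
print(ip_wta(x, W), ed_wta(x, W), pm_ed_wta(x, P_pos, P_neg))
```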
