Abstract

Conventional deep neural network (DNN) training with an end-to-end cost function is unable to exert control on, or to provide guarantees regarding, the features extracted by the layers of a DNN. Thus, despite the pervasive impact of DNNs, there remain significant concerns regarding their (lack of) interpretability and robustness. In this work, we develop a software framework in which end-to-end costs can be supplemented with costs that depend on layer-wise activations, permitting more fine-grained control of features. We apply this framework to include Hebbian/anti-Hebbian (HaH) learning in a discriminative setting, demonstrating promising gains in robustness for CIFAR10 image classification.
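The core idea of supplementing an end-to-end cost with layer-wise activation costs can be sketched as follows. This is a minimal, hypothetical illustration in NumPy, not the authors' framework: the specific HaH cost shown here (rewarding the strongest activations per sample and penalizing the rest) and the weighting `lam` are assumptions, since the abstract does not specify the cost's exact form.

```python
import numpy as np

rng = np.random.default_rng(0)

def hah_cost(acts, k=4):
    """Hypothetical Hebbian/anti-Hebbian layer cost: reward the k strongest
    activations per sample (Hebbian), penalize the remainder (anti-Hebbian)."""
    desc = np.sort(acts, axis=1)[:, ::-1]          # activations, descending
    hebbian = desc[:, :k].sum(axis=1)              # encouraged units
    anti = desc[:, k:].sum(axis=1)                 # suppressed units
    return np.mean(anti - hebbian)                 # lower is better

def cross_entropy(logits, labels):
    """Standard end-to-end discriminative cost (softmax cross-entropy)."""
    logits = logits - logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(logp[np.arange(len(labels)), labels])

# Toy forward pass: one hidden ReLU layer, random weights and data.
X = rng.normal(size=(8, 16))
W1 = rng.normal(size=(16, 32)) * 0.1
W2 = rng.normal(size=(32, 10)) * 0.1
h = np.maximum(X @ W1, 0.0)                        # layer-wise activations
logits = h @ W2
labels = rng.integers(0, 10, size=8)

lam = 0.1  # assumed weight on the layer-wise term
total = cross_entropy(logits, labels) + lam * hah_cost(h)
```

In a real training loop, `total` would be differentiated with respect to the weights, so the layer-wise term shapes the hidden features directly rather than only through the end-to-end gradient.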
