Abstract

Modern deep neural networks are becoming wider and deeper in their architectural design. However, as the number of parameters increases, the decision mechanism becomes more opaque. There is therefore a need to understand the structures arising in the hidden layers of deep neural networks. In this work, we present a new mathematical framework for describing the canonical polyhedral decomposition of the input space and, in addition, we introduce the notions of collapsing and preserving patches, pertinent to understanding the forward map and the activation space they induce. The activation space can be seen as the output of a layer and, in the particular case of ReLU activations, we prove that this output has the structure of a polyhedral complex.
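To make the notion of a polyhedral decomposition induced by ReLU activations concrete, the following minimal Python sketch enumerates the activation patterns of a toy ReLU layer; the weights, biases, and grid sampling are illustrative assumptions, not taken from the paper. Each distinct binary pattern corresponds to one polyhedral region of the input space, carved out by the hyperplanes where individual units switch on or off, and the layer acts affinely on each such region.

```python
import numpy as np

# Hypothetical toy layer: 2-D input, 3 ReLU units (illustrative values only).
W = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0]])
b = np.array([0.0, 0.0, 0.5])

def activation_pattern(x):
    """Binary pattern recording which ReLU units are active at input x."""
    return tuple((W @ x + b > 0).astype(int))

# Sample the input square and collect the patterns that occur; each pattern
# labels one polyhedral region cut out by the hyperplanes {x : W_i.x + b_i = 0}.
# (Grid sampling only detects regions that meet the sampled points.)
grid = np.linspace(-1.0, 1.0, 201)
patterns = {activation_pattern(np.array([x1, x2]))
            for x1 in grid for x2 in grid}

for p in sorted(patterns):
    # On the region with pattern p, the ReLU layer equals the affine map
    # x -> diag(p) @ (W @ x + b), so the layer is piecewise linear overall.
    print(p)
```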
