Abstract

In this work, we introduce a family of novel activation functions for deep neural networks that approximate n-ary, or n-argument, probabilistic logic. Logic has long been used to encode complex relationships between claims that are either true or false, so these activation functions are a step towards models that can encode such relationships efficiently. Unfortunately, typical feedforward networks with elementwise activation functions cannot succinctly capture certain relationships, such as exclusive disjunction (p xor q) and conditioned disjunction (if c then p else q). Our n-ary activation functions address this challenge by approximating belief functions (probabilistic Boolean logic) using logit representations of probability, and our experiments demonstrate that they can learn arbitrary logical ground truths in a single layer. Further, by representing belief tables in a basis that associates the number of nonzero parameters with the effective arity of each belief function, we forge a concrete relationship between logical complexity and sparsity, opening new optimization approaches that suppress logical complexity during training. We provide a computationally efficient PyTorch implementation and evaluate our activation functions against other logic-approximating activation functions on traditional machine learning tasks as well as on reproducing known logical relationships.
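
To make the idea concrete, the following is a minimal sketch, not the paper's implementation, of a binary (2-ary) belief-function activation in PyTorch: it converts input logits to probabilities, marginalizes a learnable 2x2 belief table over them, and returns the result in logit space. The class name BinaryBeliefActivation and this dense table parameterization are assumptions made for illustration; the paper further represents belief tables in a basis whose parameter sparsity tracks effective arity.

import torch
import torch.nn as nn

class BinaryBeliefActivation(nn.Module):
    # Illustrative sketch of a 2-ary belief-function activation:
    # inputs are logits for two claims p and q, and the output is the
    # logit of a learned probabilistic Boolean function of (p, q)
    # defined by a 2x2 belief table. This is an assumed, simplified
    # parameterization, not the paper's implementation.
    def __init__(self):
        super().__init__()
        # Learnable table_logits[i, j] = logit of P(output true | p = i, q = j).
        self.table_logits = nn.Parameter(torch.zeros(2, 2))

    def forward(self, p_logit, q_logit):
        p = torch.sigmoid(p_logit)            # P(p is true)
        q = torch.sigmoid(q_logit)            # P(q is true)
        table = torch.sigmoid(self.table_logits)
        # Marginalize the belief table over the input probabilities:
        # P(out true) = sum over i, j of P(p = i) * P(q = j) * table[i, j]
        out_prob = ((1 - p) * (1 - q) * table[0, 0]
                    + (1 - p) * q * table[0, 1]
                    + p * (1 - q) * table[1, 0]
                    + p * q * table[1, 1])
        # Return to logit space so the unit composes with further layers.
        return torch.logit(out_prob.clamp(1e-6, 1 - 1e-6))

In this sketch, training on targets drawn from the XOR truth table can drive the learned table towards [[0, 1], [1, 0]], a relationship that a single elementwise activation cannot represent.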
