Abstract

Logical reasoning as performed by human mathematicians involves an intuitive understanding of terms and formulas. This includes properties of formulas themselves as well as relations between multiple formulas. Although vital, this intuition is missing when supplying atomically encoded formulas to (neural) downstream models. In this paper we construct continuous dense vector representations of first-order logic which preserve syntactic and semantic logical properties. The resulting neural formula embeddings encode six characteristics of logical expressions present in the training set and further generalise to properties they have not explicitly been trained on. To facilitate the training, evaluation, and comparison of embedding models we extracted and generated data sets based on TPTP's first-order logic library. Furthermore, we examine the expressiveness of our encodings by conducting toy-task as well as more practical deployment tests.
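The abstract does not specify the embedding architecture. As one hedged illustration of the general idea, a formula can be mapped to a fixed-size dense vector by recursively composing the embeddings of its subterms over its parse tree. The sketch below is hypothetical and is not the authors' model: the nested-tuple formula format, the dimension, and all parameter names are assumptions made for illustration only.

```python
# A minimal, illustrative sketch (NOT the paper's architecture): embedding a
# first-order formula as a dense vector by recursively composing child
# embeddings over the formula's syntax tree. All names, dimensions, and the
# input format are hypothetical assumptions.
import numpy as np

DIM = 64
rng = np.random.default_rng(0)

# Hypothetical parameter stores: one composition matrix per logical symbol,
# one embedding vector per constant/variable/predicate name.
symbol_weights = {}
leaf_embeddings = {}

def leaf(name):
    """Look up (or lazily initialise) a dense embedding for a name."""
    if name not in leaf_embeddings:
        leaf_embeddings[name] = rng.normal(scale=0.1, size=DIM)
    return leaf_embeddings[name]

def compose(symbol, children):
    """Combine child embeddings under a logical symbol (e.g. 'forall', 'implies')."""
    if symbol not in symbol_weights:
        symbol_weights[symbol] = rng.normal(scale=0.1, size=(DIM, DIM))
    summed = sum(children) + leaf(symbol)
    return np.tanh(symbol_weights[symbol] @ summed)

def embed(tree):
    """Recursively embed a formula given as a nested (symbol, args...) tuple."""
    if isinstance(tree, str):
        return leaf(tree)
    symbol, *args = tree
    return compose(symbol, [embed(a) for a in args])

# forall X. p(X) -> q(X), written as a nested-tuple parse tree
formula = ("forall", "X", ("implies", ("p", "X"), ("q", "X")))
vec = embed(formula)
print(vec.shape)  # (64,) -- a fixed-size dense representation of the formula
```

In a trained system the random matrices and leaf vectors would be learned parameters, optimised (per the abstract) so that the resulting vectors preserve syntactic and semantic properties of the formulas; the recursion itself is only one of several plausible encoder choices.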
