Abstract

Markov Logic Networks (MLNs) are discrete generative models in the exponential family, defined by a set of weighted first-order logic rules. However, specifying these rules requires considerable expertise and can pose a significant challenge. To overcome this limitation, Neural MLNs (NMLNs) have been introduced, which allow the potential functions to be specified as neural networks. Thanks to the compact representation of their neural potential functions, NMLNs have shown impressive performance in modeling complex domains such as molecular data. Despite this strong empirical performance, the theoretical expressiveness of NMLNs is still equivalent to that of MLNs without quantifiers. In this paper, we propose a new class of NMLN, called Quantified NMLN, that extends the expressivity of NMLNs to the quantified setting. Furthermore, we show how to exploit the neural nature of NMLNs to employ learnable aggregation functions as quantifiers, increasing expressivity even further. We demonstrate the competitiveness of Quantified NMLNs against original NMLNs and state-of-the-art diffusion models in molecule generation experiments.
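The abstract's central mechanism is replacing logical quantifiers with learnable aggregation over groundings. The following is a minimal PyTorch sketch of that idea, not the paper's actual architecture: all names (`QuantifiedNeuralPotential`, `phi`, `alpha`) are illustrative assumptions. It scores each grounding with a neural potential and pools the scores with a log-sum-exp whose learnable temperature interpolates between existential-like (max) and universal-like (min) behaviour.

```python
# Illustrative sketch only; names and architecture are assumptions,
# not the paper's implementation.
import torch
import torch.nn as nn

class QuantifiedNeuralPotential(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        # Neural potential: scores a single grounding embedding.
        self.phi = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        # Learnable temperature of the aggregation ("soft quantifier").
        self.alpha = nn.Parameter(torch.tensor(1.0))

    def forward(self, groundings: torch.Tensor) -> torch.Tensor:
        # groundings: (num_groundings, in_dim) -> one score per grounding.
        scores = self.phi(groundings).squeeze(-1)
        # Log-sum-exp pooling: alpha -> +inf approximates max (exists-like),
        # alpha -> -inf approximates min (forall-like). alpha near 0 is
        # numerically unstable in this toy sketch and would need guarding.
        return torch.logsumexp(self.alpha * scores, dim=0) / self.alpha

# Usage: aggregate the scores of 10 random groundings into one potential value.
pot = QuantifiedNeuralPotential(in_dim=16)
value = pot(torch.randn(10, 16))
```

Because the aggregation is differentiable, the model can learn where on the exists/forall spectrum each potential should sit, which is the intuition behind using learnable aggregation functions as quantifiers.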
